Dataset columns:
id: string (length 10)
title: string (19-145 chars)
abstract: string (273-1.91k chars)
full_text: dict
qas: dict
figures_and_tables: dict
question: sequence
retrieval_gt: sequence
answer_gt: sequence
__index_level_0__: int64 (0-887)
1705.07368
Mixed Membership Word Embeddings for Computational Social Science
Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words. Despite their success in many applications, word embeddings have seen very little use in computational social science NLP tasks, presumably due to their reliance on big data, and to a lack of interpretability. I propose a probabilistic model-based word embedding method which can recover interpretable embeddings, without big data. The key insight is to leverage mixed membership modeling, in which global representations are shared, but individual entities (i.e. dictionary words) are free to use these representations to uniquely differing degrees. I show how to train the model using a combination of state-of-the-art training techniques for word embeddings and topic models. The experimental results show an improvement in predictive language modeling of up to 63% in MRR over the skip-gram, and demonstrate that the representations are beneficial for supervised learning. I illustrate the interpretability of the models with computational social science case studies on State of the Union addresses and NIPS articles.
{ "paragraphs": [ [ "Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations.", "Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important.", "Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts.", "In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data." ], [ "In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings.", "Later authors have noted that these word embeddings are useful for semantic representations of words, independently of whether a full joint probabilistic language model is learned, and that alternative training schemes can be beneficial for learning the embeddings. In particular, BIBREF0 , BIBREF1 proposed the skip-gram model, which inverts the language model prediction task and aims to predict the context given an input word. 
The skip-gram model is a log-bilinear discriminative probabilistic classifier parameterized by “input” word embedding vectors INLINEFORM0 for the input words INLINEFORM1 , and “output” word embedding vectors INLINEFORM2 for context words INLINEFORM3 , as shown in Table TABREF2 , top-left.", "Topic models such as latent Dirichlet allocation (LDA) BIBREF7 are another class of probabilistic language models that have been used for semantic representation BIBREF6 . A straightforward way to model text corpora is via unsupervised multinomial naive Bayes, in which a latent cluster assignment for each document selects a multinomial distribution over words, referred to as a topic, with which the documents' words are assumed to be generated. LDA topic models improve over naive Bayes by using a mixed membership model, in which the assumption that all words in a document INLINEFORM0 belong to the same topic is relaxed, and replaced with a distribution over topics INLINEFORM1 . In the model's assumed generative process, for each word INLINEFORM2 in document INLINEFORM3 , a topic assignment INLINEFORM4 is drawn via INLINEFORM5 , then the word is drawn from the chosen topic INLINEFORM6 . The mixed membership formalism provides a useful compromise between model flexibility and statistical efficiency: the INLINEFORM7 topics INLINEFORM8 are shared across all documents, thereby sharing statistical strength, but each document is free to use the topics to its own unique degree. Bayesian inference further aids data efficiency, as uncertainty over INLINEFORM9 can be managed for shorter documents. Some recent papers have aimed to combine topic models and word embeddings BIBREF21 , BIBREF22 , but they do not aim to address the small data problem for computational social science, which I focus on here. I provide a more detailed discussion of related work in the supplementary." ], [ "To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram.", "As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . 
After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0 ", " We can expect that the resulting mixed membership word embeddings are beneficial in the small-to-medium data regime for the following reasons:", "Of course, the model also requires some new parameters to be learned, namely the mixed membership proportions INLINEFORM0 . Based on topic modeling, I hypothesized that with care, these added parameters need not adversely affect performance in the small-medium data regime, for two reasons: 1) we can use a Bayesian approach to effectively manage uncertainty in them, and to marginalize them out, which prevents them being a bottleneck during training; and 2) at test time, using the posterior for INLINEFORM1 given the context, instead of the “prior” INLINEFORM2 , mitigates the impact of uncertainty in INLINEFORM3 due to limited training data: DISPLAYFORM0 ", " To obtain a vector for a word type INLINEFORM0 , we can use the prior mean, INLINEFORM1 . For a word token INLINEFORM2 , we can leverage its context via the posterior mean, INLINEFORM3 . These embeddings are convex combinations of topic vectors (see Figure FIGREF23 for an example). With fewer vectors than words, some model capacity is lost, but the flexibility of the mixed membership representation allows the model to compensate. When the number of shared vectors equals the number of words, the mixed membership skip-gram is strictly more representationally powerful than the skip-gram. With more vectors than words, we can expect that the increased representational power would be beneficial in the big data regime. As this is not my goal, I leave this for future work." ], [ "The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research." ], [ "I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits.", "The results are shown in Table TABREF25 . 
I compared to a word frequency baseline, the skip-gram (SG), and Tomas Mikolov/Google's vectors trained on Google News, INLINEFORM0 billion, via CBOW. Simulated annealing was performed for 1,000 iterations, NCE was performed for 1 million minibatches of size 128, and 128-dimensional embeddings were used (300 for Google). I used INLINEFORM1 for NIPS, INLINEFORM2 for state of the Union, and INLINEFORM3 for the two smaller datasets. Methods were able to leverage the remainder of the context, either by adding the context's vectors, or via the posterior (Equation EQREF22 ), which helped for all methods except the naive skip-gram. We can identify several noteworthy findings. First, the generic big data vectors (Google+context) were outperformed by the skip-gram on 3 out of 4 datasets (and by the skip-gram topic model on the other), by a large margin, indicating that domain-specific embeddings are often important. Second, the mixed membership models, using posterior inference, beat or matched their naive Bayes counterparts, for both the word embedding models and the topic models. As hypothesized, posterior inference on INLINEFORM4 at test time was important for good performance. Finally, the topic models beat their corresponding word embedding models at prediction. I therefore recommend the use of our MMSG topic model variant for predictive language modeling in the small data regime.", "I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method.", "Across the three datasets, several clear trends emerged (Table TABREF26 ). First, the generic Google vectors were consistently and substantially outperformed in classification performance by the skipgram (SG) and MMSG vectors, highlighting the importance of corpus-specific embeddings. Second, despite the MMSG's superior performance at language modeling on small datasets, the SG features outperformed the MMSG's at the document categorization task. By encoding vectors at the topic level instead of the word level, the MMSG loses word level resolution in the embeddings, which turned out to be valuable for these particular classification tasks. We are not, however, restricted to use only one type of embedding to construct features for classification. Interestingly, when the SG and MMSG features were concatenated (SG+MMSG), this improved classification performance over these vectors individually. This suggests that the topic-level MMSG vectors and word-level SG vectors encode complementary information, and both are beneficial for performance. 
Finally, further concatenating the generic Google vectors' features (SG+MMSG+Google) improved performance again, despite the fact that these vectors performed poorly on their own. It should be noted that tf-idf, which is notoriously effective for document categorization, outperformed the embedding methods on these datasets.", "I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task." ], [ "I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”).", "On the NIPS corpus, for the input word “Bayesian” (Table ), the naive Bayes and skip-gram models learned a topic with words that refer to Bayesian networks, probabilistic models, and neural networks. The mixed membership models are able to separate this into more coherent and specific topics including Bayesian modeling, Bayesian training of neural networks (for which Sir David MacKay was a strong proponent, and Andreas Weigend wrote an influential early paper), and Monte Carlo methods. By performing the additive composition of word vectors, which we obtain by finding the prior mean vector for each word type INLINEFORM0 , INLINEFORM1 (and then normalizing), we obtain relevant topics INLINEFORM2 as nearest neighbors (Figure FIGREF28 ). Similarly, we find that the additive composition of topic and word vectors works correctly: INLINEFORM3 , and INLINEFORM4 .", "The INLINEFORM0 -SNE visualization of NIPS documents (Figure FIGREF28 ) shows some temporal clustering patterns (blue documents are more recent, red documents are older, and gray points are topics). I provide a more detailed case study on NIPS in the supplementary material." ], [ "I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. 
I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues.", "Acknowledgements", "I thank Eric Nalisnick and Padhraic Smyth for many helpful discussions." ], [ "]" ], [ "In this supplementary document, we discuss related work in the literature and its relation to our proposed methods, provide a case study on NIPS articles, and derive the collapsed Gibbs sampling update for the MMSGTM, which we leverage when training the MMSG." ], [ "The Gaussian LDA model of BIBREF21 improves the performance of topic modeling by leveraging the semantic information encoded in word embeddings. Gaussian LDA modifies the generative process of LDA such that each topic is assumed to generate the vectors via its own Gaussian distribution. Similarly to our MMSG model, in Gaussian LDA each topic is encoded with a vector, in this case the mean of the Gaussian. It takes pre-trained word embeddings as input, rather than learning the embeddings from data within the same model, and does not aim to perform word embedding.", "The topical word embedding (TWE) models of BIBREF22 reverse this, as they take LDA topic assignments of words as input, and aim to use them to improve the resultant word embeddings. The authors propose three variants, each of which modifies the skip-gram training objective to use LDA topic assignments together with words. In the best performing variant, called TWE-1, a standard skip-gram word embedding model is trained independently with another skip-gram variant, which tries to predict context words given the input word's topic assignment. The skip-gram embedding and the topic embeddings are concatenated to form the final embedding.", "At test time, a distribution over topics for the word given the context, INLINEFORM0 is estimated according to the topic counts over the other context words. Using this as a prior, a posterior over topics given both the input word and the context is calculated, and similarities between pairs of words (with their contexts) are averaged over this posterior, in a procedure inspired by those used by BIBREF43 , BIBREF36 . The primary similarity to our MMSG approach is the use of a training algorithm involving the prediction of context words, given a topic. Our method does this as part of an overall model-based inference procedure, and we learn mixed membership proportions INLINEFORM1 rather than using empirical counts as the prior over topics for a word token. In accordance with the skip-gram's prediction model, we are thus able to model the context words in the data likelihood term when computing the posterior probability of the topic assignment. TWE-1 requires that topic assignments are available at test time. It provides a mechanism to predict contextual similarity, but not to predict held-out context words, so we are unable to compare to it in our experiments.", "Other neurally-inspired topic models include replicated softmax BIBREF34 , and its successor, DocNADE BIBREF37 . Replicated softmax extends the restricted Boltzmann machine to handle multinomial counts for document modeling. DocNADE builds on the ideas of replicated softmax, but uses the NADE architecture, where observations (i.e. words) are modeled sequentially given the previous observations." ], [ "Multi-prototype embeddings models are another relevant line of work. These models address lexical ambiguity by assigning multiple vectors to each word type, each corresponding to a different meaning of that word. 
BIBREF43 propose to cluster the occurrences of each word type, based on features extracted from its context. Embeddings are then learned for each cluster. BIBREF36 apply a similar approach, but they use initial single-prototype word embeddings to provide the features used for clustering. These clustering methods have some resemblance to our topic model pre-clustering step, although their clustering is applied within instances of a given word type, rather than globally across all word types, as in our methods. This results in models with more vectors than words, while we aim to find fewer vectors than words, to reduce the model's complexity for small datasets. Rather than employing an off-the-shelf clustering algorithm and then applying an unrelated embedding model to its output, our approach aims to perform model-based clustering within an overall joint model of topic/cluster assignments and word vectors.", "Perhaps the most similar model to ours in the literature is the probabilistic multi-prototype embedding model of BIBREF45 , who treat the prototype assignment of a word as a latent variable, assumed drawn from a mixture over prototypes for each word. The embeddings are then trained using EM. Our MMSG model can be understood as the mixed membership version of this model, in which the prototypes (vectors) are shared across all word types, and each word type has its own mixed membership proportions across the shared prototypes. While a similar EM algorithm can be applied to the MMSG, the E-step is much more expensive, as we typically desire many more shared vectors (often in the thousands) than we would prototypes per a single word type (Tian et al. use ten in their experiments). We use the Metropolis-Hastings-Walker algorithm with the topic model reparameterization of our model in order to address this by efficiently pre-solving the E-step." ], [ "Mixed membership modeling is a flexible alternative to traditional clustering, in which each data point is assigned to a single cluster. Instead, mixed membership models posit that individual entities are associated with multiple underlying clusters, to differing degrees, as encoded by a mixed membership vector that sums to one across the clusters BIBREF28 , BIBREF26 . These mixed membership proportions are generally used to model lower-level grouped data, such as the words inside a document. Each lower-level data point inside a group is assumed to be assigned to one of the shared, global clusters according to the group-level membership proportions. Thus, a mixed membership model consists of a mixture model for each group, which share common mixture component parameters, but with differing mixture proportions.", "This formalism has lead to probabilistic models for a variety of applications, including medical diagnosis BIBREF39 , population genetics BIBREF42 , survey analysis BIBREF29 , computer vision BIBREF27 , BIBREF30 , text documents BIBREF35 , BIBREF7 , and social network analysis BIBREF25 . Nonparametric Bayesian extensions, in which the number of underlying clusters is learned from data via Bayesian inference, have also been proposed BIBREF44 . In this work, dictionary words are assigned a mixed membership distribution over a set of shared latent vector space embeddings. Each instantiation of a dictionary word (an “input” word) is assigned to one of the shared embeddings based on its dictionary word's membership vector. The words in its context (“output” words) are assumed to be drawn based on the chosen embedding." 
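The mixed membership generative process summarized in the paragraph above (each dictionary word holds a membership vector over a set of shared topic embeddings; each occurrence of the word picks one shared embedding, and its context words are drawn from it) can be illustrated with a minimal sketch. This is not the authors' implementation: the sizes, the Dirichlet initialization of the membership vectors, and all variable names are assumptions made for illustration; only the log-bilinear (softmax over an inner product of topic and output-word vectors) form follows the model description.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 1000, 50, 128            # vocabulary size, number of shared topics, embedding dim (assumed)

theta = rng.dirichlet(np.full(K, 0.1), size=V)    # per-word mixed membership over the shared topics
topic_vecs = rng.normal(scale=0.1, size=(K, D))   # shared topic embeddings v_k
out_vecs = rng.normal(scale=0.1, size=(V, D))     # "output" word vectors u_c

def sample_context(input_word, context_size=5):
    """Draw one context for an input word: choose a topic from the word's
    membership vector, then draw context words from the log-bilinear model."""
    z = rng.choice(K, p=theta[input_word])          # topic assignment for this context
    logits = out_vecs @ topic_vecs[z]               # log-bilinear score for every output word
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                            # softmax over the vocabulary
    return z, rng.choice(V, size=context_size, p=probs)

z, context = sample_context(rng.integers(V))        # toy usage on random parameters
```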
], [ "In Figure FIGREF33 , we show a zoomed in INLINEFORM0 -SNE visualization of NIPS document embeddings. We can see regions of the space corresponding to learning algorithms (bottom), data space and latent space (center), training neural networks (top), and nearest neighbors (bottom-left). We also visualized the authors' embeddings via INLINEFORM1 -SNE (Figure FIGREF34 ). We find regions of latent space for reinforcement learning authors (left: “state, action,...,” Singh, Barto,Sutton), probabilistic methods (right: “mixture, model,” “monte, carlo,” Bishop, Williams, Barber, Opper, Jordan, Ghahramani, Tresp, Smyth), and evaluation (top-right: “results, performance, experiments,...”)." ], [ "Let INLINEFORM0 be the number of output words in the INLINEFORM1 th context, let INLINEFORM2 be those output words, and let INLINEFORM3 be the input words other that INLINEFORM4 (similarly, topic assignments INLINEFORM5 and output words INLINEFORM6 ). Then the collapsed Gibbs update samples from the conditional distribution INLINEFORM7 ", " We recognize the first integral as the mean of a Dirichlet distribution which we obtain via conjugacy: INLINEFORM0 ", " The above can also be understood as the probability of the next ball drawn from a multivariate Polya urn model, also known as the Dirichlet-compound multinomial distribution, arising from the posterior predictive distribution of a discrete likelihood with a Dirichlet prior. We will need the full form of such a distribution to analyze the second integral. Once again leveraging conjugacy, we have: INLINEFORM0 ", " INLINEFORM0 ", " where INLINEFORM0 is the number of times that output word INLINEFORM1 occurs in the INLINEFORM2 th context, since the final integral is over the full support of a Dirichlet distribution, which integrates to one. Eliminating terms that aren't affected by the INLINEFORM3 assignment, the above is INLINEFORM4 ", " where we have used the fact that INLINEFORM0 for any INLINEFORM1 , and integer INLINEFORM2 . We can interpret this as the probability of drawing the context words under the multivariate Polya urn model, in which the number of “colored balls” (word counts plus prior counts) is increased by one each time a certain color (word) is selected. In other words, in each step, corresponding to the selection of each context word, we draw a ball from the urn, then put it back, along with another ball of the same color. The INLINEFORM3 and INLINEFORM4 terms reflect that the counts have been changed by adding these extra balls into the urn in each step. The second to last equation shows that this process is exchangeable: it does not matter which order the balls were drawn in when determining the probability of the sequence. Multiplying this with the term from the first integral, calculated earlier, gives us the final form of the update equation, INLINEFORM5 ", "" ] ], "section_name": [ "Introduction", "Background", "The Mixed Membership Skip-Gram", "Experimental Results", "Quantitative Experiments", "Computational Social Science Case Studies: State of the Union and NIPS", "Conclusion", "Supplementary Material", "Related Work", "Topic Modeling and Word Embeddings", "Multi-Prototype Embedding Models", "Mixed Membership Modeling", "Case Study on NIPS", "Derivation of the Collapsed Gibbs Update" ] }
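The main text above describes how vectors are recovered from the MMSG: a word type's embedding is the prior mean, a convex combination of topic vectors weighted by the word's membership proportions, while a word token's embedding is the posterior mean, with those proportions reweighted by how well each topic explains the observed context (the equation itself, referred to as EQREF22, is elided in this dump). Below is a hedged numpy sketch of both quantities, using placeholder parameter values and hypothetical names; it is an illustration of the described computation, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
V, K, D = 1000, 50, 128                                  # assumed sizes, as in the sketch above
theta = rng.dirichlet(np.full(K, 0.1), size=V)           # per-word topic proportions (placeholder values)
topic_vecs = rng.normal(scale=0.1, size=(K, D))          # shared topic embeddings v_k
out_vecs = rng.normal(scale=0.1, size=(V, D))            # output word vectors u_c

def type_embedding(w):
    """Prior-mean embedding for word type w: a convex combination of topic vectors."""
    return theta[w] @ topic_vecs

def token_embedding(w, context):
    """Posterior-mean embedding for a token of w observed with `context` (a list of
    word ids): reweight the word's topic proportions by how well each topic explains
    the observed context under the log-bilinear model, then mix the topic vectors."""
    log_post = np.log(theta[w])
    for k in range(K):
        logits = out_vecs @ topic_vecs[k]
        log_probs = logits - np.logaddexp.reduce(logits)  # log-softmax over the vocabulary
        log_post[k] += log_probs[context].sum()
    post = np.exp(log_post - np.logaddexp.reduce(log_post))
    return post @ topic_vecs                              # convex combination weighted by the posterior

vec_type = type_embedding(3)                              # toy usage
vec_token = token_embedding(3, [10, 42, 7])
```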
{ "answers": [ { "annotation_id": [ "638a79523ddc482d96be422ad091c20c92ccf7d9" ], "answer": [ { "evidence": [ "In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data.", "I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method.", "I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task." 
], "extractive_spans": [ "document categorization", "regression tasks" ], "free_form_answer": "", "highlighted_evidence": [ "In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest.", "I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I", "I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "3d2d234552afa12b52e6c2caef65925286d400cc" ], "answer": [ { "evidence": [ "I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits." ], "extractive_spans": [ "mean reciprocal rank" ], "free_form_answer": "", "highlighted_evidence": [ "I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "acb298246a4b3036c79e8e6e9618762cc82683de" ], "answer": [ { "evidence": [ "To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram.", "As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0" ], "extractive_spans": [ " skip-gram", "LDA" ], "free_form_answer": "", "highlighted_evidence": [ "To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 ", "We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right).", "As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . 
After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "321ae98d153c58df5e886904482062fd2717be2f" ], "answer": [ { "evidence": [ "Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important.", "I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues." 
], "extractive_spans": [], "free_form_answer": "Training embeddings from small-corpora can increase the performance of some tasks", "highlighted_evidence": [ "Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting.", " Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15", " have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "28f72bd8e3448e8b49513cae46bd426476b54693" ], "answer": [ { "evidence": [ "I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”)." ], "extractive_spans": [], "free_form_answer": "Visualization of State of the union addresses", "highlighted_evidence": [ "I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "What supervised learning tasks are attempted with these representations?", "What is MRR?", "Which techniques for word embeddings and topic models are used?", "Why is big data not appropriate for this task?", "What is an example of a computational social science NLP task?" 
], "question_id": [ "4c71ed7d30ee44cf85ffbd7756b985e32e8e07da", "1949d84653562fa9e83413796ae55980ab7318f2", "7ee660927e2b202376849e489faa7341518adaf9", "f6380c60e2eb32cb3a9d3bca17cf4dc5ae584eca", "c7d99e66c4ab555fe3d616b15a5048f3fe1f3f0e" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Most similar words to “learning,” based on word embeddings trained on NIPS articles, and on the large generic Google News corpus (Mikolov et al., 2013a,b).", "Table 2: “Generative” models. Identifying the skip-gram (top-left)’s word distributions with topics yields analogous topic models (right), and mixed membership modeling extensions (bottom).", "Figure 1: Mixed membership word embeddings v̄w for word type w (prior) and v̂wi for word token wi (posterior), are convex combinations of topic embeddings vk.", "Table 3: SG = skip-gram, TM = topic model, MM = mixed membership.", "Table 4: Mean reciprocal rank of held-out context words. SG = skip-gram, TM = topic model, MM = mixed membership. Bold indicates statistically significant improvement versus SG.", "Table 5: Document categorization (top, classification accuracy, larger is better), and predicting the year of State of the Union addresses (bottom, RMSE, LOO cross-validation, smaller is better).", "Figure 2: State of the Union (SOTU) addresses. Colored circles are t-SNE projected embeddings for SOTU addresses. Color = party (red = GOP, blue = Democrats, light green = Whigs, pink = Democratic-Republicans, orange = Federalists (John Adams), green = George Washington), size = recency (year, see dates in green). Gray circles correspond to topics.", "Figure 3: Left: Vector compositionality examples, NIPS. Right: NIPS documents/ topics, t-SNE.", "Figure 4: NIPS documents/topics, t-SNE, zoomed in. Blue/red = more recent/older, gray = topics." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "5-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Figure2-1.png", "8-Figure3-1.png", "12-Figure4-1.png" ] }
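The quantitative experiments described above hold out 10,000 (input word, context word) pairs, treat context prediction as a ranking problem, and report mean reciprocal rank. A minimal sketch of that metric follows, assuming only a scoring function that assigns a score to every candidate context word; the scorer and the dummy data below are placeholders, not the paper's models.

```python
import numpy as np

def mean_reciprocal_rank(heldout_pairs, score_fn):
    """heldout_pairs: iterable of (input_word_id, true_context_word_id) pairs.
    score_fn(input_word_id) -> array with one score per candidate context word
    (e.g. a model's predictive log-probabilities)."""
    reciprocal_ranks = []
    for w_in, w_ctx in heldout_pairs:
        scores = score_fn(w_in)
        rank = 1 + int(np.sum(scores > scores[w_ctx]))   # rank of the true word, 1 = best
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# toy usage with a random scorer over a 1,000-word vocabulary
rng = np.random.default_rng(2)
dummy_scores = rng.normal(size=(1000, 1000))
pairs = [(rng.integers(1000), rng.integers(1000)) for _ in range(100)]
print(mean_reciprocal_rank(pairs, lambda w: dummy_scores[w]))
```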
[ "Why is big data not appropriate for this task?", "What is an example of a computational social science NLP task?" ]
[ [ "1705.07368-Introduction-1", "1705.07368-Conclusion-0" ], [ "1705.07368-Computational Social Science Case Studies: State of the Union and NIPS-0" ] ]
[ "Training embeddings from small-corpora can increase the performance of some tasks", "Visualization of State of the union addresses" ]
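For the document categorization experiments described in the paper above, document features are built by summing per-token embedding vectors (posterior-mean vectors for the MMSG), normalizing to unit length, optionally concatenating feature sets (SG+MMSG+Google), and training a logistic regression, with tf-idf as a baseline. The sketch below follows those stated steps only; function and variable names are hypothetical and hyperparameters are not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer

def doc_vector(token_vectors):
    """Sum the per-token embeddings (SG vectors, or MMSG posterior-mean vectors)
    and normalize the result to unit length, as described above."""
    v = np.sum(token_vectors, axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def fit_classifier(train_doc_vectors, train_labels):
    """train_doc_vectors: (n_docs, dim) array of document embeddings.
    Concatenating SG, MMSG, and generic (e.g. Google News) features amounts to
    np.hstack on the per-model document matrices before this call."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_doc_vectors, train_labels)
    return clf

def fit_tfidf_baseline(train_texts, train_labels):
    """tf-idf baseline on raw document text."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(train_texts)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_labels)
    return vec, clf
```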
383
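The State of the Union year-prediction experiment in the record above uses lasso-regularized linear regression evaluated with leave-one-out cross-validation and RMSE. A minimal sketch of that evaluation loop follows; the regularization strength and variable names are assumptions, not values reported in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut

def loo_rmse(X, y, alpha=0.1):
    """Leave-one-out cross-validation RMSE for a lasso-regularized linear model,
    e.g. predicting the year of a State of the Union address from its document
    embedding. X: (n_docs, dim) features, y: years. alpha is a placeholder."""
    squared_errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = Lasso(alpha=alpha)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        squared_errors.append((pred[0] - y[test_idx][0]) ** 2)
    return float(np.sqrt(np.mean(squared_errors)))
```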
2001.05970
#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media
Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive amount of personal information and content available on Twitter, presents a promising opportunity to extract data-driven insights to complement ongoing survey-based studies of sexual harassment in college. In this paper, we analyze the influence of the #MeToo trend on a pool of college followers. The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories, and that there exists a significant correlation between the prevalence of this trend and official reports in several major geographical regions. Furthermore, we uncover the salient sentiments of the #MeToo tweets using deep semantic meaning representations, and discuss their implications for the affected users experiencing different types of sexual harassment. We hope this study can raise further awareness regarding sexual misconduct in academia.
{ "paragraphs": [ [ "Sexual harassment is defined as \"bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors.\" In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body.", "Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the \"MeToo\" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter.", "Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives." ], [ "Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5.", "The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. (2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. 
Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset.", "There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12." ], [ "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ], [ "We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. \"reallyyy\"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser." ], [ "The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939." ], [ "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. 
News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ], [ "Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14." ], [ "In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms." ], [ "Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet).", "Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence \"He harassed me,\" where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called \"theme\" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\\%$ F1 score), so that we can have a solid ground for further analysis." 
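The TRIPS parser itself is a dedicated research system and is not reproduced here. As a rough, minimal stand-in for the action-agent-affected extraction just described, the sketch below uses spaCy's dependency parse to pull (agent, verb predicate, affected) triples out of a short narrative sentence; the model name and the simple nsubj/dobj heuristic are illustrative assumptions, not the parser, ontology types, or speech-act analysis used in this work.

```python
# Minimal stand-in for the action-agent-affected extraction described above.
# Uses spaCy's dependency parse instead of the TRIPS parser (illustrative assumption).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed small English model

def extract_event_triples(text):
    """Return rough (agents, verb_predicate, affected) triples from a short narrative."""
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            agents = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            affected = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            if agents or affected:
                triples.append((agents, token.lemma_, affected))
    return triples

print(extract_event_triples("He harassed me."))
# expected output along the lines of: [(['He'], 'harass', ['me'])]
```

A real pipeline would additionally attach the ontology types and speech acts that the TRIPS Logical Form provides.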
], [ "In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example \"He harassed me.\":", "${Sentiment(\\textrm {verb}) -}$: something negative happened to the writer.", "$Sentiment(\\textrm {affected}) -$: the writer (affected) most likely feels negative about the event.", "$Perspective(\\textrm {affected} \\rightarrow \\textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event.", "$Perspective(\\textrm {reader} \\rightarrow \\textrm {affected})-$: the reader most likely view the agent as the antagonist.", "$Perspective(\\textrm {affected} \\rightarrow \\textrm {affected})+$: the reader most likely feels sympathetic towards the writer.", "In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1).", "where $\\mathcal {I}=\\mathbf {1_{w \\in \\mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\\mathcal {A}$, $\\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results." ], [ "The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results." 
], [ "Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the \"Yes means yes\" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny." ], [ "We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences.", "In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral." ], [ "Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19.", "Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users." ], [ "In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of official reported cases from the government data. 
This is a positive sign suggesting that the higher education system is moving into a right direction to effectively utilize Title IV, a portion of the Education Amendments Act of 1972, which requests colleges to submit their sexual misconduct reports to the officials and protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault such as region, reaction and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by various literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns of the assaulting cases. We believe our methodologies on defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues.", "Furthermore, we find that the social media-driven approach is highly useful in facilitating crime-related sociology research on a large scale and spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising awareness in the community on concurrent social problems.", "Last but not least, many other aspects of the text data from social media, which could provide many interesting insights on sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that with our current dataset, an extension to take advantage of cutting-edge linguistic techniques will be the next step to address the previously unanswered questions and uncover deeper meanings of the tweets on sexual harassment." ] ], "section_name": [ "Introduction", "Related Work", "Dataset ::: Data Collection", "Dataset ::: Text Preprocessing", "Dataset ::: College Metadata", "Methodology ::: Regression Analysis", "Methodology ::: Labeling Sexual Harassment", "Methodology ::: Topic Modeling on #MeToo Tweets", "Methodology ::: Semantic Parsing with TRIPS", "Methodology ::: Connotation Frames and Sentiment Analysis", "Experimental Results ::: Topical Themes of #MeToo Tweets", "Experimental Results ::: Regression Result", "Experimental Results ::: Event-Entity Sentiment Analysis", "Experimental Results ::: Limitations and Ethical Implications", "Conclusion" ] }
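To make the college-level regression described earlier concrete, the following is a minimal sketch of fitting such a model with statsmodels; the column names and the synthetic data are illustrative stand-ins for the U.S. News and Campus Safety and Security attributes, not the study's actual dataset. Treating the Midwest as the reference region mirrors the comparison reported in the regression results.

```python
# Hypothetical sketch of the college-level linear regression: normalized #MeToo user
# count regressed on enrollment, gender ratio, sector, region, and normalized
# officially reported rape-related cases.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # one row per college in the study
df = pd.DataFrame({
    "enrollment": rng.integers(2_000, 40_000, n),
    "male_female_ratio": rng.normal(0.9, 0.1, n),
    "private": rng.integers(0, 2, n),
    "region": rng.choice(["Northeast", "South", "West", "Midwest"], n),
    "reported_cases_per_student": rng.uniform(0.0, 0.002, n),
})
# Synthetic response loosely tied to reported cases, standing in for the real counts.
df["metoo_users_per_student"] = (
    0.001 + 2.0 * df["reported_cases_per_student"] + rng.normal(0, 0.0005, n)
)

model = smf.ols(
    "metoo_users_per_student ~ enrollment + male_female_ratio + private"
    " + C(region, Treatment(reference='Midwest')) + reported_cases_per_student",
    data=df,
).fit()
print(model.summary())
```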
{ "answers": [ { "annotation_id": [ "5a280f6f2ee2fb369c1b5ff5a59c638763efefd6" ], "answer": [ { "evidence": [ "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ], "extractive_spans": [], "free_form_answer": "Northeast U.S, South U.S., West U.S. and Midwest U.S.", "highlighted_evidence": [ "Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "757e950d7571ea1e8002f26ac4cd4dbeacb0d2e2" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Linear regression results.", "We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ], "extractive_spans": [], "free_form_answer": "0.9098 correlation", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Linear regression results.", "We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "98539e4210985c0822dfbc0d0d9368a4e68d383f" ], "answer": [ { "evidence": [ "In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. 
Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms." ], "extractive_spans": [], "free_form_answer": "Using Latent Dirichlet Allocation on TF-IDF transformed from the corpus", "highlighted_evidence": [ "In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users.", "Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "9588e59be6b2ecf4cbc538327f00b6c4f31bdd1c" ], "answer": [ { "evidence": [ "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ], "extractive_spans": [ "60,000 " ], "free_form_answer": "", "highlighted_evidence": [ "We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "5fc94f529a5e1285bdc11328ada2235724b988ba" ], "answer": [ { "evidence": [ "Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the \"Yes means yes\" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny." ], "extractive_spans": [], "free_form_answer": "Northeast U.S., West U.S. and South U.S.", "highlighted_evidence": [ "Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "29250e7e5d792b0bcf905985619d7a451f8ee23d" ], "answer": [ { "evidence": [ "In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ], "extractive_spans": [ "51,104" ], "free_form_answer": "", "highlighted_evidence": [ "We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "", "", "" ], "paper_read": [ "no", "no", "no", "", "", "" ], "question": [ "Which major geographical regions are studied?", "How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?", "How are the topics embedded in the #MeToo tweets extracted?", "How many tweets are explored in this paper?", "Which geographical regions correlate to the trend?", "How many followers did they analyze?" ], "question_id": [ "5dc1aca619323ea0d4717d1f825606b2b7c21f01", "dd5c9a370652f6550b4fd13e2ac317eaf90973a8", "39c78924df095c92e058ffa5a779de597e8c43f4", "a95188a0f35d3cb3ca70ae1527d57ac61710afa3", "a1557ec0f3deb1e4cd1e68f4880dcecda55656dd", "096f5c59f43f49cab1ef37126341c78f272c0e26" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "twitter", "twitter", "twitter", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "", "", "" ] }
{ "caption": [ "Figure 1: The meaning representation of the example sentence ”He harassed me.” in TRIPS LF, the Ontology types of the words are indicated by ”:*” and the role-argument relations between them are denoted by named arcs.", "Table 1: Top 5 topics from all #MeToo Tweets from 51,104 college followers.", "Table 2: Linear regression results.", "Table 3: Semantic sentiment results." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png" ] }
[ "Which major geographical regions are studied?", "How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?", "How are the topics embedded in the #MeToo tweets extracted?", "Which geographical regions correlate to the trend?" ]
[ [ "2001.05970-Methodology ::: Regression Analysis-0" ], [ "2001.05970-Methodology ::: Regression Analysis-0", "2001.05970-4-Table2-1.png" ], [ "2001.05970-Methodology ::: Topic Modeling on #MeToo Tweets-0" ], [ "2001.05970-Experimental Results ::: Regression Result-0" ] ]
[ "Northeast U.S, South U.S., West U.S. and Midwest U.S.", "0.9098 correlation", "Using Latent Dirichlet Allocation on TF-IDF transformed from the corpus", "Northeast U.S., West U.S. and South U.S." ]
386
1706.04815
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset, which aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages, and the words in the answer are not necessarily in the passages. We therefore develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage as evidence, and the answer synthesis model takes the evidence as additional features along with the question and passage to further elaborate the final answers. We build the answer extraction model with state-of-the-art neural networks for single-passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages. The answer synthesis model is based on a sequence-to-sequence neural network with the extracted evidence as features. Experiments show that our extraction-then-synthesis method outperforms state-of-the-art methods.
{ "paragraphs": [ [ "Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, attracts great attentions from both research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides the large-scale manually created datasets for model training and testing of machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessary in the passages in the MS-MARCO dataset. Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages.", "Existing methods for the MS-MARCO dataset usually follow the extraction based approach for single passage in the SQuAD dataset. It formulates the task as predicting the start and end positions of the answer in the passage. However, as defined in the MS-MARCO dataset, the answer may come from multiple spans, and the system needs to elaborate the answer using words in the passages and words from the questions as well as words that cannot be found in the passages or questions.", "Table 1 shows several examples from the MS-MARCO dataset. Except in the first example the answer is an exact text span in the passage, in other examples the answers need to be synthesized or generated from the question and passage. In the second example the answer consists of multiple text spans (hereafter evidence snippets) from the passage. In the third example, the answer contains words from the question. In the fourth example, the answer has words that cannot be found in the passages or question. In the last example, all words are not in the passages or questions.", "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers.", "Specifically, we develop the answer extraction model with state-of-the-art attention based neural networks which predict the start and end positions of evidence snippets. As multiple passages are provided for each question in the MS-MARCO dataset, we propose incorporating passage ranking as an additional task to improve the results of evidence extraction under a multi-task learning framework. We use the bidirectional recurrent neural networks (RNN) for the word-level representation, and then apply the attention mechanism BIBREF2 to incorporate matching information from question to passage at the word level. Next, we predict start and end positions of the evidence snippet by pointer networks BIBREF3 . Moreover, we aggregate the word-level matching information of each passage using the attention pooling, and use the passage-level representation to rank all candidate passages as an additional task. For the answer synthesis, we apply the sequence-to-sequence model to synthesize the final answer based on the extracted evidence. 
The question and passage are encoded by a bi-directional RNN in which the start and end positions of extracted snippet are labeled as features. We combine the question and passage information in the encoding part to initialize the attention-equipped decoder to generate the answer.", "We conduct experiments on the MS-MARCO dataset. The results show our extraction-then-synthesis framework outperforms our baselines and all other existing methods in terms of ROUGE-L and BLEU-1.", "Our contributions can be summarized as follows:" ], [ "Benchmark datasets play an important role in recent progress in reading comprehension and question answering research. BIBREF4 release MCTest whose goal is to select the best answer from four options given the question and the passage. CNN/Daily-Mail BIBREF5 and CBT BIBREF6 are the cloze-style datasets in which the goal is to predict the missing word (often a named entity) in a passage. Different from above datasets, the SQuAD dataset BIBREF0 whose answer can be much longer phrase is more challenging. The answer in SQuAD is a segment of text, or span, from the corresponding reading passage. Similar to the SQuAD, MS-MARCO BIBREF1 is the reading comprehension dataset which aims to answer the question given a set of passages. The answer in MS-MARCO is generated by human after reading all related passages and not necessarily sub-spans of the passages.", "To the best of our knowledge, the existing works on the MS-MARCO dataset follow their methods on the SQuAD. BIBREF7 combine match-LSTM and pointer networks to produce the boundary of the answer. BIBREF8 and BIBREF9 employ variant co-attention mechanism to match the question and passage mutually. BIBREF8 propose a dynamic pointer network to iteratively infer the answer. BIBREF10 apply an additional gate to the attention-based recurrent networks and propose a self-matching mechanism for aggregating evidence from the whole passage, which achieves the state-of-the-art result on SQuAD dataset. Other works which only focus on the SQuAD dataset may also be applied on the MS-MARCO dataset BIBREF11 , BIBREF12 , BIBREF13 .", "The sequence-to-sequence model is widely-used in many tasks such as machine translation BIBREF14 , parsing BIBREF15 , response generation BIBREF16 , and summarization generation BIBREF17 . We use it to generate the synthetic answer with the start and end positions of the evidence snippet as features." ], [ "Following the overview in Figure 1 , our approach consists of two parts as evidence extraction and answer synthesis. The two parts are trained in two stages. The evidence extraction part aims to extract evidence snippets related to the question and passage. The answer synthesis part aims to generate the answer based on the extracted evidence snippets. We propose a multi-task learning framework for the evidence extraction shown in Figure 15 , and use the sequence-to-sequence model with additional features of the start and end positions of the evidence snippet for the answer synthesis shown in Figure 3 ." ], [ "We use Gated Recurrent Unit (GRU) BIBREF18 instead of basic RNN. Equation 8 describes the mathematical model of the GRU. $r_t$ and $z_t$ are the gates and $h_t$ is the hidden state. ", "$$z_t &= \\sigma (W_{hz} h_{t-1} + W_{xz} x_t + b_z)\\nonumber \\\\\nr_t &= \\sigma (W_{hr} h_{t-1} + W_{xr} x_t + b_r)\\nonumber \\\\\n\\hat{h_t} &= \\Phi (W_h (r_t \\odot h_{t-1}) + W_x x_t + b)\\nonumber \\\\\nh_t &= (1-z_t)\\odot h_{t-1} + z_t \\odot \\hat{h_t}$$ (Eq. 
8) " ], [ "We propose a multi-task learning framework for evidence extraction. Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct. To this end, we propose improving text span prediction with passage ranking. Specifically, as shown in Figure 2 , in addition to predicting a text span, we apply another task to rank candidate passages with the passage-level representation.", "Consider a question Q = $\\lbrace w_t^Q\\rbrace _{t=1}^m$ and a passage P = $\\lbrace w_t^P\\rbrace _{t=1}^n$ , we first convert the words to their respective word-level embeddings and character-level embeddings. The character-level embeddings are generated by taking the final hidden states of a bi-directional GRU applied to embeddings of characters in the token. We then use a bi-directional GRU to produce new representation $u^Q_1, \\dots , u^Q_m$ and $u^P_1, \\dots , u^P_n$ of all words in the question and passage respectively: ", "$$u_t^Q = \\mathrm {BiGRU}_Q(u_{t - 1}^Q, [e_t^Q,char_t^Q]) \\nonumber \\\\\nu_t^P = \\mathrm {BiGRU}_P(u_{t - 1}^P, [e_t^P,char_t^P])$$ (Eq. 11) ", " Given question and passage representation $\\lbrace u_t^Q\\rbrace _{t=1}^m$ and $\\lbrace u_t^P\\rbrace _{t=1}^n$ , BIBREF2 propose generating sentence-pair representation $\\lbrace v_t^P\\rbrace _{t=1}^n$ via soft-alignment of words in the question and passage as follows: ", "$$v_t^P = \\mathrm {GRU} (v_{t-1}^P, c^Q_t)$$ (Eq. 12) ", "where $c^Q_t=att(u^Q, [u_t^P, v_{t-1}^P])$ is an attention-pooling vector of the whole question ( $u^Q$ ): ", "$$s_j^t &= \\mathrm {v}^\\mathrm {T}\\mathrm {tanh}(W_u^Q u_j^Q + W_u^P u_t^P) \\nonumber \\\\\na_i^t &= \\mathrm {exp}(s_i^t) / \\Sigma _{j=1}^m \\mathrm {exp}(s_j^t) \\nonumber \\\\\nc^Q_t &= \\Sigma _{i=1}^m a_i^t u_i^Q$$ (Eq. 13) ", " BIBREF19 introduce match-LSTM, which takes $u_j^P$ as an additional input into the recurrent network. BIBREF10 propose adding gate to the input ( $[u_t^P, c^Q_t]$ ) of RNN to determine the importance of passage parts. ", "$$&g_t = \\mathrm {sigmoid}(W_g [u_t^P, c^Q_t]) \\nonumber \\\\\n&[u_t^P, c^Q_t]^* = g_t\\odot [u_t^P, c^Q_t] \\nonumber \\\\\n&v_t^P = \\mathrm {GRU} (v_{t-1}^P, [u_t^P, c^Q_t]^*)$$ (Eq. 14) ", "We use pointer networks BIBREF3 to predict the position of evidence snippets. Following the previous work BIBREF7 , we concatenate all passages to predict one span for the evidence snippet prediction. Given the representation $\\lbrace v_t^P\\rbrace _{t=1}^N$ where $N$ is the sum of the length of all passages, the attention mechanism is utilized as a pointer to select the start position ( $p^1$ ) and end position ( $p^2$ ), which can be formulated as follows: ", "$$s_j^t &= \\mathrm {v}^\\mathrm {T}\\mathrm {tanh}(W_h^{P} v_j^P + W_{h}^{a} h_{t-1}^a) \\nonumber \\\\\na_i^t &= \\mathrm {exp}(s_i^t) / \\Sigma _{j=1}^N \\mathrm {exp}(s_j^t) \\nonumber \\\\\np^t &= \\mathrm {argmax}(a_1^t, \\dots , a_N^t)$$ (Eq. 16) ", " Here $h_{t-1}^a$ represents the last hidden state of the answer recurrent network (pointer network). The input of the answer recurrent network is the attention-pooling vector based on current predicted probability $a^t$ : ", "$$c_t &= \\Sigma _{i=1}^N a_i^t v_i^P \\nonumber \\\\\nh_t^a &= \\mathrm {GRU}(h_{t-1}^a, c_t)$$ (Eq. 17) ", " When predicting the start position, $h_{t-1}^a$ represents the initial hidden state of the answer recurrent network. 
We utilize the question vector $r^Q$ as the initial state of the answer recurrent network. $r^Q = att(u^Q, v^Q_r)$ is an attention-pooling vector of the question based on the parameter $v^Q_r$ : ", "$$s_j &= \\mathrm {v}^\\mathrm {T}\\mathrm {tanh}(W_u^{Q} u_j^Q + W_{v}^{Q} v_r^Q) \\nonumber \\\\\na_i &= \\mathrm {exp}(s_i) / \\Sigma _{j=1}^m \\mathrm {exp}(s_j) \\nonumber \\\\\nr^Q &= \\Sigma _{i=1}^m a_i u_i^Q$$ (Eq. 18) ", "For this part, the objective function is to minimize the following cross entropy: ", "$$\\mathcal {L}_{AP} = -\\Sigma _{t=1}^{2}\\Sigma _{i=1}^{N}[y^t_i\\log a^t_i + (1-y^t_i)\\log (1-a^t_i)]$$ (Eq. 19) ", "where $y^t_i \\in \\lbrace 0,1\\rbrace $ denotes a label. $y^t_i=1$ means $i$ is a correct position, otherwise $y^t_i=0$ .", "In this part, we match the question and each passage from word level to passage level. Firstly, we use the question representation $r^Q$ to attend words in each passage to obtain the passage representation $r^P$ where $r^P = att(v^P, r^Q)$ . ", "$$s_j &= \\mathrm {v}^\\mathrm {T}\\mathrm {tanh}(W_v^{P} v_j^P + W_{v}^{Q} r^Q) \\nonumber \\\\\na_i &= \\mathrm {exp}(s_i) / \\Sigma _{j=1}^n \\mathrm {exp}(s_j) \\nonumber \\\\\nr^P &= \\Sigma _{i=1}^n a_i v_i^P$$ (Eq. 21) ", " Next, the question representation $r^Q$ and the passage representation $r^P$ are combined to pass two fully connected layers for a matching score, ", "$$g = v_g^{\\mathrm {T}}(\\mathrm {tanh}(W_g[r^Q,r^P]))$$ (Eq. 22) ", "For one question, each candidate passage $P_i$ has a matching score $g_i$ . We normalize their scores and optimize following objective function: ", "$$\\hat{g}_i = \\mathrm {exp}(g_i) / \\Sigma _{j=1}^k \\mathrm {exp}(g_j) \\nonumber \\\\\n\\mathcal {L}_{PR} = -\\sum _{i=1}^{k}[y_i\\log \\hat{g}_i + (1-y_i)\\log (1-\\hat{g}_i)]$$ (Eq. 23) ", "where $k$ is the number of passages. $y_i \\in \\lbrace 0,1\\rbrace $ denotes a label. $y_i=1$ means $P_i$ is the correct passage, otherwise $y_i=0$ .", "The evident extraction part is trained by minimizing joint objective functions: ", "$$\\mathcal {L}_{E} = r \\mathcal {L}_{AP} + (1-r) \\mathcal {L}_{PR}$$ (Eq. 25) ", "where $r$ is the hyper-parameter for weights of two loss functions." ], [ "As shown in Figure 3 , we use the sequence-to-sequence model to synthesize the answer with the extracted evidences as features. We first produce the representation $h_{t}^P$ and $h_{t}^Q$ of all words in the passage and question respectively. When producing the answer representation, we combine the basic word embedding $e_t^p$ with additional features $f_t^s$ and $f_t^e$ to indicate the start and end positions of the evidence snippet respectively predicted by evidence extraction model. $f_t^s =1$ and $f_t^e =1$ mean the position $t$ is the start and end of the evidence span, respectively. ", "$$&h_{t}^P =\\mathrm {BiGRU}(h_{t-1}^P, [e_t^p,f_t^s,f_t^e]) \\nonumber \\\\\n&h_{t}^Q = \\mathrm {BiGRU}(h_{t-1}^Q,e_t^Q)$$ (Eq. 27) ", "On top of the encoder, we use GRU with attention as the decoder to produce the answer. At each decoding time step $t$ , the GRU reads the previous word embedding $ w_{t-1} $ and previous context vector $ c_{t-1} $ as inputs to compute the new hidden state $ d_{t} $ . 
To initialize the GRU hidden state, we use a linear layer with the last backward encoder hidden state $ \\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{h}}}_{1}^P $ and $ \\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{h}}}_{1}^Q $ as input: ", "$$d_{t} &= \\text{GRU}(w_{t-1}, c_{t-1}, d_{t-1}) \\nonumber \\\\\nd_{0} &= \\tanh (W_{d}[\\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{h}}}_{1}^P,\\scalebox {-1}[1]{\\vec{\\scalebox {-1}[1]{h}}}_{1}^Q] + b)$$ (Eq. 28) ", " where $ W_{d} $ is the weight matrix and $ b $ is the bias vector.", "The context vector $ c_{t} $ for current time step $ t $ is computed through the concatenate attention mechanism BIBREF14 , which matches the current decoder state $ d_{t} $ with each encoder hidden state $ h_{t} $ to get the weighted sum representation. Here $h_{i}$ consists of the passage representation $h_{t}^P$ and the question representation $h_{t}^Q$ . ", "$$s^t_j &= v_{a}^{\\mathrm {T}}\\tanh (W_{a}d_{t-1} + U_{a}h_{j}) \\nonumber \\\\\na^t_i &= \\mathrm {exp}(s_i^t) / \\Sigma _{j=1}^n \\mathrm {exp}(s_j^t) \\nonumber \\\\\nc_{t} &= \\Sigma _{i = 1}^{n} a^t_ih_{i}$$ (Eq. 30) ", " We then combine the previous word embedding $ w_{t-1} $ , the current context vector $ c_{t} $ , and the decoder state $ d_{t} $ to construct the readout state $ r_{t} $ . The readout state is then passed through a maxout hidden layer BIBREF20 to predict the next word with a softmax layer over the decoder vocabulary. ", "$$r_{t} &= W_{r}w_{t-1} + U_{r}c_{t} + V_{r}d_{t} \\nonumber \\\\\nm_{t} &= [\\max \\lbrace r_{t, 2j-1}, r_{t, 2j}\\rbrace ]^{\\mathrm {T}} \\nonumber \\\\\np(y_{t} &\\vert y_{1}, \\dots , y_{t-1}) = \\text{softmax}(W_{o}m_{t})$$ (Eq. 31) ", " where $ W_{a} $ , $ U_{a} $ , $ W_{r} $ , $ U_{r} $ , $ V_{r} $ and $ W_{o} $ are parameters to be learned. Readout state $ r_{t} $ is a $ 2d $ -dimensional vector, and the maxout layer (Equation 31 ) picks the max value for every two numbers in $ r_{t} $ and produces a d-dimensional vector $ m_{t} $ .", "Our goal is to maximize the output probability given the input sentence. Therefore, we optimize the negative log-likelihood loss function: ", "$$\\mathcal {L}_{S}= - \\frac{1}{\\vert \\mathcal {D} \\vert } \\Sigma _{(X, Y) \\in \\mathcal {D}} \\log p(Y|X)$$ (Eq. 32) ", "where $\\mathcal {D}$ is the set of data. $X$ represents the question and passage including evidence snippets, and $Y$ represents the answer." ], [ "We conduct our experiments on the MS-MARCO dataset BIBREF1 . We compare our extraction-then-synthesis framework with pure extraction model and other baseline methods on the leaderboard of MS-MARCO. Experimental results show that our model achieves better results in official evaluation metrics. We also conduct ablation tests to verify our method, and compare our framework with the end-to-end generation framework." ], [ "For the MS-MARCO dataset, the questions are user queries issued to the Bing search engine and the context passages are from real web documents. The data has been split into a training set (82,326 pairs), a development set (10,047 pairs) and a test set (9,650 pairs).", "The answers are human-generated and not necessarily sub-spans of the passages so that the metrics in the official tool of MS-MARCO evaluation are BLEU BIBREF21 and ROUGE-L BIBREF22 . In the official evaluation tool, the ROUGE-L is calculated by averaging the score per question, however, the BLEU is normalized with all questions. We hold that the answer should be evaluated case-by-case in the reading comprehension task. 
Therefore, we mainly focus on the result in the ROUGE-L." ], [ "The evidence extraction and the answer synthesis are trained in two stages.", "For evidence extraction, since the answers are not necessarily sub-spans of the passages, we choose the span with the highest ROUGE-L score with the reference answer as the gold span in the training. Moreover, we only use the data whose ROUGE-L score of chosen text span is higher than 0.7, therefore we only use 71,417 training pairs in our experiments.", "For answer synthesis, the training data consists of two parts. First, for all passages in the training data, we choose the best span with highest ROUGE-L score as the evidence, and use the corresponding reference answer as the output. We only use the data whose ROUGE-L score of chosen evidence snippet is higher than 0.5. Second, we apply our evidence extraction model to all training data to obtain the extracted span. Then we treat the passage to which this span belongs as the input.", "For answer extraction, we use 300-dimensional uncased pre-trained GloVe embeddings BIBREF23 for both question and passage without update during training. We use zero vectors to represent all out-of-vocabulary words. Hidden vector length is set to 150 for all layers. We also apply dropout BIBREF24 between layers, with dropout rate 0.1. The weight $r$ is set to 0.8.", "For answer synthesis, we use an identical vocabulary set for the input and output collected from the training data. We set the vocabulary size to 30,000 according to the frequency and the other words are set to $<$ unk $>$ . All word embeddings are updated during the training. We set the word embedding size to 300, set the feature embedding size of start and end positions of the extracted snippet to 50, and set all GRU hidden state sizes to 150.", "The model is optimized using AdaDelta BIBREF25 with initial learning rate of 1.0. All hyper-parameters are selected on the MS-MARCO development set.", "When decoding, we first run our extraction model to obtain the extracted span, and run our synthesis model with the extracted result and the passage that contains this span. We use the beam search with beam size of 12 to generate the sequence. After the sequence-to-sequence model, we post-process the sequence with following rules:", "We only keep once if the sequence-to-sequence model generates duplicated words or phrases.", "For all “ $<$ unk $>$ ” and the word as well as phrase which are not existed in the extracted answer, we try to refine it by finding a word or phrase with the same adjacent words in the extracted span and passage.", "If the generated answer only contains a single word “ $<$ unk $>$ ”, we use the extracted span as the final answer." ], [ "We conduct experiments with following settings:", "S-Net (Extraction): the model that only has the evidence extraction part.", "S-Net: the model that consists of the evidence extraction part and the answer synthesis part.", "We implement two state-of-the-art baselines on reading comprehension, namely BiDAF BIBREF9 and Prediction BIBREF7 , to extract text spans as evidence snippets. Moreover, we implement a baseline that only has the evidence extraction part without the passage ranking. Then we apply the answer synthesis part on top of their results. We also compare with other methods on the MS-MARCO leaderboard, including FastQAExt BIBREF26 , ReasoNet BIBREF27 , and R-Net BIBREF10 ." ], [ "Table 2 shows the results on the MS-MARCO test data. 
Our extraction model achieves 41.45 and 44.08 in terms of ROUGE-L and BLEU-1, respectively. Next we train the model 30 times with the same setting, and select models using a greedy search. We sum the probability at each position of each single model to decide the ensemble result. Finally we select 13 models for ensemble, which achieves 42.92 and 44.97 in terms of ROUGE-L and BLEU-1, respectively, which achieves the state-of-the-art results of the extraction model. Then we test our synthesis model based on the extracted evidence. Our synthesis model achieves 3.78% and 3.73% improvement on the single model and ensemble model in terms of ROUGE-L, respectively. Our best result achieves 46.65 in terms of ROUGE-L and 44.78 in terms of BLEU-1, which outperforms all existing methods with a large margin and are very close to human performance. Moreover, we observe that our method only achieves significant improvement in terms of ROUGE-L compared with our baseline. The reason is that our synthesis model works better when the answer is short, which almost has no effect on BLEU as it is normalized with all questions.", "Since answers on the test set are not published, we analyze our model on the development set. Table 3 shows results on the development set in terms of ROUGE-L. As we can see, our method outperforms the baseline and several strong state-of-the-art systems. For the evidence extraction part, our proposed multi-task learning framework achieves 42.23 and 44.11 for the single and ensemble model in terms of ROUGE-L. For the answer synthesis, the single and ensemble models improve 3.72% and 3.65% respectively in terms of ROUGE-L. We observe the consistent improvement when applying our answer synthesis model to other answer span prediction models, such as BiDAF and Prediction." ], [ "We analyze the result of incorporating passage ranking as an additional task. We compare our multi-task framework with two baselines as shown in Table 4 . For passage selection, our multi-task model achieves the accuracy of 38.9, which outperforms the pure answer prediction model with 4.3. Moreover, jointly learning the answer prediction part and the passage ranking part is better than solving this task by two separated steps because the answer span can provide more information with stronger supervision, which benefits the passage ranking part. The ROUGE-L is calculated by the best answer span in the selected passage, which shows our multi-task learning framework has more potential for better answer.", "We compare the result of answer extraction and answer synthesis in different categories grouped by the upper bound of extraction method in Table 5 . For the question whose answer can be exactly matched in the passage, our answer synthesis model performs slightly worse because the sequence-to-sequence model makes some deviation when copying extracted evidences. In other categories, our synthesis model achieves more or less improvement. For the question whose answer can be almost found in the passage (ROUGE-L $\\ge $ 0.8), our model achieves 0.2 improvement even though the space that can be raised is limited. For the question whose upper performance via answer extraction is between 0.6 and 0.8, our model achieves a large improvement of 2.0. Part of questions in the last category (ROUGE-L $<$ 0.2) are the polar questions whose answers are “yes” or “no”. 
Although the answer is not in the passage or question, our synthesis model can easily solve this problem and determine the correct answer through the extracted evidences, which leads to such improvement in this category. However, in these questions, answers are too short to influence the final score in terms of BLEU because it is normalized in all questions. Moreover, the score decreases due to the penalty of length. Due to the limitation of BLEU, we only report the result in terms of ROUGE-L in our analysis.", "We compare our extraction-then-synthesis model with several end-to-end generation models in Table 6 . S2S represents the sequence-to-sequence framework shown in Figure 3 . The difference among our synthesis model and all entries in the Table 6 is the information we use in the encoding part. The authors of MS-MACRO publish a baseline of training a sequence-to-sequence model with the question and answer, which only achieves 8.9 in terms of ROUGE-L. Adding all passages to the sequence-to-sequence model can obviously improve the result to 28.75. Then we only use the question and the selected passage to generate the answer. The only difference with our synthesis model is that we add the position features to the basic sequence-to-sequence model. The result is still worse than our synthesis model with a large margin, which shows the matching between question and passage is very important for generating answer. Next, we build an end-to-end framework combining matching and generation. We apply the sequence-to-sequence model on top of the matching information by taking question sensitive passage representation $v^P_t$ in the Equation 14 as the input of sequence-to-sequence model, which only achieves 6.28 in terms of ROUGE-L. Above results show the effectiveness of our model that solves this task with two steps. In the future, we hope the reinforcement learning can help the connection between evidence extraction and answer synthesis." ], [ "In this paper, we propose S-Net, an extraction-then-synthesis framework, for machine reading comprehension. The extraction model aims to match the question and passage and predict most important sub-spans in the passage related to the question as evidence. Then, the synthesis model synthesizes the question information and the evidence snippet to generate the final answer. We propose a multi-task learning framework to improve the evidence extraction model by passage ranking to extract the evidence snippet, and use the sequence-to-sequence model for answer synthesis. We conduct experiments on the MS-MARCO dataset. Results demonstrate that our approach outperforms pure answer extraction model and other existing methods.", "We only annotate one evidence snippet in the sequence-to-sequence model for synthesizing answer, which cannot solve the question whose answer comes from multiple evidences, such as the second example in Table 1 . Our extraction model is based on the pointer network which selects the evidence by predicting the start and end positions of the text span. Therefore the top candidates are similar as they usually share the same start or end positions. By ranking separated candidates for predicting evidence snippets, we can annotate multiple evidence snippets as features in the sequence-to-sequence model for questions in this category in the future." ], [ "We thank the MS-MARCO organizers for help in submissions." 
] ], "section_name": [ "Introduction", "Related Work", "Our Approach", "Gated Recurrent Unit", "Evidence Extraction", "Answer Synthesis", "Experiment", "Dataset and Evaluation Metrics", "Implementation Details", "Baseline Methods", "Result", "Discussion", "Conclusion and Future Work", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "7435bfa311d0cdff749d83b5936b0217ae68141e" ], "answer": [ { "evidence": [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ], "extractive_spans": [], "free_form_answer": "evidence extraction and answer synthesis", "highlighted_evidence": [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "2925ab34588a7601f8400eecee231c6a0d8a0115" ], "answer": [ { "evidence": [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ], "extractive_spans": [ " extraction-then-synthesis framework" ], "free_form_answer": "", "highlighted_evidence": [ "In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "f7ca9f602d61413af283408652667a1eecde0f27" ], "answer": [ { "evidence": [ "We propose a multi-task learning framework for evidence extraction. Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct. To this end, we propose improving text span prediction with passage ranking. Specifically, as shown in Figure 2 , in addition to predicting a text span, we apply another task to rank candidate passages with the passage-level representation." ], "extractive_spans": [ "there are several related passages for each question in the MS-MARCO dataset.", "MS-MARCO also annotates which passage is correct" ], "free_form_answer": "", "highlighted_evidence": [ "Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. 
In addition to annotating the answer, MS-MARCO also annotates which passage is correct. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat" ], "question": [ "What two components are included in their proposed framework?", "Which framework they propose in this paper?", "Why MS-MARCO is different from SQuAD?" ], "question_id": [ "c348a8c06e20d5dee07443e962b763073f490079", "0300cf768996849cab7463d929afcb0b09c9cf2a", "dd8f72cb3c0961b5ca1413697a00529ba60571fe" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Machine Reading", "Machine Reading", "Machine Reading" ], "topic_background": [ "research", "research", "research" ] }
{ "caption": [ "Figure 1: Overview of S-Net. It first extracts evidence snippets by matching the question and passage, and then generates the answer by synthesizing the question, passage, and evidence snippets.", "Figure 2: Evidence Extraction Model", "Figure 3: Answer Synthesis Model", "Table 2: The performance on the MS-MARCO test set. *Using the ensemble result of extraction models as the input of the synthesis model.", "Table 3: The performance on the MS-MARCO development set in terms of ROUGE-L. *Using the ensemble result of extraction models as the input of the synthesis model. +Wang & Jiang (2016b) report their Prediction with 37.3.", "Table 4: Results of passage ranking. -w/o Passage Ranking: the model that only has evidence extraction part, without passage ranking part. -Passage Ranking then Extraction: the model that selects the passage firstly and then apply the extraction model only on the selected passage.", "Table 5: The performance of questions in different levels of necessary of synthesis in terms of ROUGE-L on MS-MARCO development set.", "Table 6: The performance on MS-MARCO development set of end-to-end methods." ], "file": [ "2-Figure1-1.png", "5-Figure2-1.png", "7-Figure3-1.png", "9-Table2-1.png", "9-Table3-1.png", "10-Table4-1.png", "10-Table5-1.png", "11-Table6-1.png" ] }
[ "What two components are included in their proposed framework?" ]
[ [ "1706.04815-Introduction-3" ] ]
[ "evidence extraction and answer synthesis" ]
387
1903.07398
Deep Text-to-Speech System with Seq2Seq Model
Recent trends in neural network based text-to-speech/speech synthesis pipelines have employed recurrent Seq2seq architectures that can synthesize realistic-sounding speech directly from text characters. These systems, however, have complex architectures and take a substantial amount of time to train. We introduce several modifications to these Seq2seq architectures that allow for faster training time while also reducing the complexity of the model architecture. We show that our proposed model can achieve attention alignment much faster than previous architectures and that good audio quality can be achieved with a model that's much smaller in size. Sample audio is available at https://soundcloud.com/gary-wang-23/sets/tts-samples-for-cmpt-419.
{ "paragraphs": [ [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic sounding speech, while at the same time eliminating the need for complex sub-systems that neede to be developed and trained seperately.", "The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or \"decompress\" it into audio. This is a difficult problem as there're multi ways for the same text to be spoken. In addtion, unlike end-to-end translation or speech recognition, TTS ouptuts are continuous, and output sequences are much longer than input squences.", "Recent work on neural TTS can be split into two camps, in one camp Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3 . In the other camp, full convolutional Seq2Seq models are used BIBREF2 . Our model belongs in the first of these classes using recurrent architectures. Specifically we make the following contributions:" ], [ "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 .", "A parrallel work exploring Seq2Seq RNN architecture for text-to-speech was called Char2Wav BIBREF3 . This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is guassian mixture model (GMM) attention from Alex Grave's work. Their model mapped text sequence to 80 dimension vectors used for the WORLD Vocoder BIBREF9 , which invert these vectors into audio wave.", "More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10 . The deepvoice architecture is composed of causal 1-D convolution layers for both encoder and decoder. They utilized query-key attention similar to that from the transformer architecure BIBREF5 .", "Another fully convolutional Seq2Seq architecture known as DCTTS was proposed BIBREF6 . In this architecture they employ modules composed of Causal 1-D convolution layers combined with Highway networks. In addition they introduced methods for help guide attention alignments early. As well as a forced incremental attention mechanism that ensures monotonic increasing of attention read as the model decodes during inference." ], [ "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. 
The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "Figure FIGREF3 below shows the overall architecture of our model." ], [ "The encoder acts to encode the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of an INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). Two linear projection layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension; these are the key and value vectors. DISPLAYFORM0 ", "where INLINEFORM0 ." ], [ "Query-key attention is similar to that from transformers BIBREF5 . Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of the previous decoder-rnn hidden state, INLINEFORM3 , combined with the attention-rnn hidden state, INLINEFORM4 . DISPLAYFORM0 ", "Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 ", "Note that similar to transformers BIBREF5 , we scale the dot-product by INLINEFORM0 to prevent the softmax function from entering regions where it has extremely small gradients." ], [ "The decoder is an autoregressive recurrent neural network that predicts the mel spectrogram from the encoded input sentence one frame at a time.", "The decoder decodes the hidden representation from the encoder, with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRUs with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, is for computing attention-mechanism related items such as the attention query INLINEFORM1 . DISPLAYFORM0 ", "The second LSTM/GRU, DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 ", "A 2-layer dense prenet of dimensions (256,256) projects the previous mel spectrogram output INLINEFORM0 into hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce a useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consecutive mel frames at once into our hidden representation, which is faster, unlike Tacotron 2 which projects 1 mel frame at a time.", "The DecoderRNN's hidden state INLINEFORM0 is also projected to mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layers followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to mel spectrogram INLINEFORM3 , which is added to the linear projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 ", "A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. DISPLAYFORM0 ", "A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , i.e. a sigmoid function. This stop token allows the model to learn when to stop decoding during inference. 
During inference, if the stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 " ], [ "The total loss on the model is computed as the sum of 3 component losses: 1. Mean-Squared-Error (MSE) of the predicted and ground-truth mel spectrogram; 2. MSE of the linear spectrogram; 3. binary cross-entropy loss of our stop token. The Adam optimizer is used to optimize the model with a learning rate of INLINEFORM0 .", "The model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn for long-term sequences, the teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs." ], [ "Our proposed improvements come from the observation that employing generic Seq2seq models for the TTS application misses out on further optimizations that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progresses nearly linearly in time with the sequence of output mel spectrogram frames. With this insight, we can make 3 modifications to the model that allow us to train faster while using a smaller model." ], [ "In the original Tacotron 2, the attention mechanism used was location-sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention.", "We propose to replace this attention with the simpler query-key attention from the transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than, say, machine translation, we employ query-key attention as it's simple to implement and requires fewer parameters than the original Bahdanau attention." ], [ "Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotonic as early as possible.", "As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created that applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 ", "Where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set at 0.2. This modification dramatically speeds up the attention alignment and model convergence.", "Figure 3 below shows the results visually. The two images are a side-by-side comparison of the model's attention after 10k training steps. The image on the left is trained with the attention mask, and the image on the right is not. We can see that with the attention mask, clear attention alignment is achieved much faster." ], [ "During inference, the attention INLINEFORM0 occasionally skips multiple characters or stalls on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal.", "The forced incremental attention is implemented as follows:", "Given INLINEFORM0 , the position of the character read at the INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e. INLINEFORM5 ." 
], [ "The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female english speaker collect from across 7 different non-fictional books. The total training data time is around 21 hours of audio.", "One thing to note that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As most things with deep learning, the better the data, the better the model and results." ], [ "Our model was trained for 300 epochs, with batch size of 32. We used pre-trained opensource implementation of Tactron 2 (https://github.com/NVIDIA/tacotron2) as baseline comparison. Note this open-source version is trained for much longer (around 1000 epochs) however due to our limited compute we only trained our model up to 300 epochs" ], [ "We decide to evaluate our model against previous baselines on two fronts, Mean Opnion Score (MOS) and training speed.", "Typical TTS system evaluation is done with mean opinion score (MOS). To compute this score, many samples of a TTS system is given to human evaluators and rated on a score from 1 (Bad) to 5 (Excellent). the MOS is then computed as the arithmetic mean of these score: DISPLAYFORM0 ", "Where INLINEFORM0 are individual ratings for a given sample by N subjects.", "For TTS models from google and Baidu, they utilized Amazon mechanical Turk to collect and generate MOS score from larger number of workers. However due to our limited resources, we chose to collect MOS score from friends and families (total 6 people).", "For training time comparison, we choose the training time as when attention alignment start to become linear and clear. After digging through the git issues in the Tacotron 2 open-source implementation, we found a few posts where users posted their training curve and attention alignment during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models the training time is not comparable as they either don't apply (e.g parametric model) or are not reported (Tacotron griffin lim, Deepvoice 3).", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." ], [ "We introduce a new architecture for end-to-end neural text-to-speech system. Our model relies on RNN-based Seq2seq architecture with a query-key attention. We introduce novel guided attention mask to improve model training speed, and at the same time is able to reduce model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and allow model to generate coherent speech for very long sentences." 
] ], "section_name": [ "Introduction", "Related Work", "Model Overview", "Text Encoder", "Query-Key Attention", "Decoder", "Training and Loss", "Proposed Improvements", "Changes to Attention Mechanism", "Guided Attention Mask", "Forced Incremental Attention", "Experiment Dataset", "Experiment Procedure", "Evaluation Metrics", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "4c53d0802b646168c6b7dc17242d7d69944205b9" ], "answer": [ { "evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female english speaker collect from across 7 different non-fictional books. The total training data time is around 21 hours of audio.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 ." ], "extractive_spans": [ "LJSpeech" ], "free_form_answer": "", "highlighted_evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models", "The open source LJSpeech Dataset was used to train our TTS model.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "29b73558d708bddc45f1886e6bc1bfe1c627d10f" ], "answer": [ { "evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 .", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined the original additive Seq2Seq BIBREF7 Bahdanau attention.", "We propose to replace this attention with the simpler query-key attention from transformer model. 
As mentioned earlier, since for TTS the attention mechanism is an easier problem than say machine translation, we employ query-key attention as it's simple to implement and requires less parameters than the original Bahdanau attention.", "Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotoic as early as possible.", "As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0" ], "extractive_spans": [], "free_form_answer": "Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible", "highlighted_evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4", "In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined the original additive Seq2Seq BIBREF7 Bahdanau attention.\n\nWe propose to replace this attention with the simpler query-key attention from transformer model", "Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotoic as early as possible.", "As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "9c58c47ea139b783804bcb5f4694b107a2d2632c" ], "answer": [ { "evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 .", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . 
The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." ], "extractive_spans": [ "Direct comparison of model parameters" ], "free_form_answer": "", "highlighted_evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models.", "Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . ", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "eaf4f2aa99412d4155f163e2771d716a4376b547" ], "answer": [ { "evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 .", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models.", "The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 ", "Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality." 
], "unanswerable": false, "yes_no": true } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Which dataset(s) do they evaluate on?", "Which modifications do they make to well-established Seq2seq architectures?", "How do they measure the size of models?", "Do they reduce the number of parameters in their architecture compared to other direct text-to-speech models?" ], "question_id": [ "c8f11561fc4da90bcdd72f76414421e1527c0287", "51de39c8bad62d3cbfbec1deb74bd8a3ac5e69a8", "d9cbcaf8f0457b4be59178446f1a280d17a923fa", "fc69f5d9464cdba6db43a525cecde2bf6ddaaa57" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Overall architecture of our Seq2Seq model for neural text-to-speech. Note that inputs, encoder, decoder and attention are labelled different colors.", "Figure 2: Attention guide mask. Note that bright area has larger values and dark area has small values.", "Figure 3: Attention alignment plots for two identifical models trained with and without guided attention masks. Both models have been trained for 10k in this figure.", "Table 1: Result table comparing MOS score and rough estimated training time between different TTS systems." ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "7-Table1-1.png" ] }
[ "Which modifications do they make to well-established Seq2seq architectures?" ]
[ [ "1903.07398-Related Work-0", "1903.07398-Introduction-0", "1903.07398-Guided Attention Mask-0", "1903.07398-Changes to Attention Mechanism-1", "1903.07398-Model Overview-0", "1903.07398-Changes to Attention Mechanism-0" ] ]
[ "Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible" ]
390
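The guided attention loss in the preceding record penalizes attention mass that falls far from the diagonal of the character-by-mel-frame alignment matrix, with a width parameter set at 0.2. The record does not spell out the exact functional form of the mask, so the Gaussian-band formulation below is an assumption borrowed from the DCTTS-style guided attention it cites; treat it as a sketch rather than the authors' exact loss.

```python
import numpy as np

def guided_attention_mask(n_chars, n_frames, g=0.2):
    """Mask that is ~0 near the diagonal and approaches 1 far from it (assumed form)."""
    n = np.arange(n_chars)[:, None] / max(n_chars, 1)    # normalized character positions
    t = np.arange(n_frames)[None, :] / max(n_frames, 1)  # normalized mel-frame positions
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))

def guided_attention_loss(attention):
    """Mean attention weight surviving the mask; attention has shape (n_chars, n_frames)."""
    return float(np.mean(attention * guided_attention_mask(*attention.shape)))
```

Added to the overall training objective, a term like this pushes the alignment toward the diagonal early in training, which is the effect the record reports.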
1710.06700
Build Fast and Accurate Lemmatization for Arabic
In this paper we describe the complexity of building a lemmatizer for Arabic, which has a rich and complex derivational morphology, and we discuss the need for fast and accurate lemmatization to enhance Arabic Information Retrieval (IR) results. We also introduce a new data set that can be used to test lemmatization accuracy, and an efficient lemmatization algorithm that outperforms state-of-the-art Arabic lemmatization in terms of accuracy and speed. We share the data set and the code publicly.
{ "paragraphs": [ [ "Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. Lemma is also called dictionary form, or citation form, and it refers to all words having the same meaning.", "Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and researches in Arabic Information Retrieval (IR) systems show the need for representing Arabic words at lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1 . In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2 .", "Word stem is that core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of the word is different than its lemma, for example the words: believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe).", "While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with (typically) different suffix to get its lemma.", "This extended abstract is organized as follows: Section SECREF2 shows some complexities in building Arabic lemmatization, and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built and report results and error analysis in section SECREF5 ; and Section SECREF6 discusses the results and concludes the abstract." ], [ "Arabic is the largest Semitic language spoken by more than 400 million people. It's one of the six official languages in the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow some templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون> (wsyftHwn) “and they will open” has the triliteral root فتح> (ftH), which has the basic meaning of opening, has prefixes وس> (ws) “and will”, suffixes ون> (wn) “they”, stem يفتح> (yftH) “open”, and lemma فتح> (ftH) “the concept of opening”.", "IR systems typically cluster words together into groups according to three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field which leads to high recall but low precision due to language complexity, for example words كتب، ٠كتبة، كتاب> (ktb, mktbp, ktAb) “wrote, library, book” have the same root كتب> (ktb) with the basic meaning of writing, so searching for any of these words by root, yields getting the other words which may not be desirable for many users.", "Other researchers show the importance of using stem level for improving retrieval precision and recall as they capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture similar words having the same semantic meaning. For example, stem patterns for broken plurals are different from their singular patterns, e.g. 
the plural أقلام> (AqlAm) “pens” will not match the stem of its singular form قلم> (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns from their perfect verbs, e.g. the verbs استطاع، يستطيع> (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems.", "A lot of work has been done on word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic little work has been done, especially in lemmatization, and there is no open-source code or new testing data that can be used by other researchers for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers, and it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots.", "Khoja's stemmer BIBREF4 and the Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, the MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2% as the percentage of words where the chosen analysis (provided by the SAMA morphological analyzer BIBREF7 ) has the correct lemma.", "In this paper, we present open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization, allowing researchers to evaluate using the same dataset that we used and reproduce the same experiments." ], [ "To make the annotated data publicly available, we selected 70 news articles from the Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles each.", "Words are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 .", "As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added, and it is used for evaluating our lemmatizer and comparing with the state-of-the-art system for lemmatization, MADAMIRA." ], [ "We were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art system for segmentation (MADAMIRA), which considers surrounding context and many linguistic features. This system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9 . This work can be considered as an extension to word segmentation.", "From a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations ordered by the number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words. 
About 73% of the corpus is in MSA and covers a variety of genres like politics, economy, sports, society, etc., and the remaining part is mostly religious texts written in classical Arabic (CA). The effectiveness of using this corpus in building a state-of-the-art diacritizer was proven in BIBREF10 . For example, the word وبنود> (wbnwd) “and items” is found 4 times in this corpus with two full diacritization forms وَبُنُودِ، وَبُنُودٍ> (wabunudi, wabunudK) “items, with different grammatical case endings”, which appeared 3 times and once respectively. All unique undiacritized words in this corpus were analyzed using the Buckwalter morphological analyzer, which gives all possible word diacritizations, and their segmentation, POS tag and lemma as shown in Figure FIGREF3 .", "The idea is to take the most frequent diacritized form for words appearing in this corpus, and find the morphological analysis with the highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred, and hence its lemma is بند> (banod, bnd after diacritics removal) “item”.", "While comparing two diacritized forms from the corpus and Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemas, for example while words are fully diacritized in the corpus, Buckwalter analysis gives diacritics without case ending (i.e. without context), and removes short vowels in some cases, for example before long vowels, and after the definite article ال> (Al) “the”, etc.", "It is worth mentioning that there are many cases in Buckwalter analysis where, for the input word, there are two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example, the word سيارة> (syArp) “car” has two morphological analyses with different lemmas, namely سيار> (syAr) “walker”, and سيارة> (syArp) “car” in this order, while the second lemma is the most common one. To solve this problem, all these words are reported, and the top frequent words are revised and the order of lemmas changed according to actual usage in the modern language.", "The lemmatization algorithm can be summarized in Figure FIGREF4 , and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html" ], [ "Data was formatted in a plain text format where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different writings of foreign named entities, for example as in هونغ كونغ، هونج كونج> (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g. the lemmas في، فيما> (fy, fymA) are both valid for the function word فيما> (fymA) “while”).", "Table TABREF5 shows results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives a +7% relative gain over MADAMIRA in the lemmatization task.", "In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependency which makes its integration in other systems quite simple."
], [ "Most of the lemmatization errors in our system are due to fact that we use the most common diacritization of words without considering their contexts which cannot solve the ambiguity in cases like nouns and adjectives that share the same diacritization forms, for example the word أكادي٠ية> (AkAdymyp) can be either a noun and its lemma is أكادي٠ية> (AkAdymyp) “academy”, or an adjective and its lemma is أكادي٠ي> (AkAdymy) “academic”. Also for MADAMIRA, errors in selecting the correct Part-of-Speech (POS) for ambiguous words, and foreign named entities.", "In the full paper, we will quantify error cases in our lemmatizer and MADAMIRA and give examples for each case which can help in enhancing both systems." ], [ "In this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than state-of-the art system; MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form for words and matching this form with the morphological analysis with highest similarity score. We plan to study the performance if the algorithm was modified to provide diacritized lemmas which can be useful for other applications." ] ], "section_name": [ "Introduction", "Background", "Data Description", "system Description", "Evaluation", "Error Analysis", "Discussion" ] }
{ "answers": [ { "annotation_id": [ "6f8264da377ead4925f48d0637888203343381c7" ], "answer": [ { "evidence": [ "In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependency which makes its integration in other systems quite simple." ], "extractive_spans": [], "free_form_answer": "how long it takes the system to lemmatize a set number of words", "highlighted_evidence": [ "In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "29d8e794ef14a685d9ccba40c66a63baa7352bb0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Lemmatization accuracy using WikiNews testset" ], "extractive_spans": [], "free_form_answer": "97.32%", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Lemmatization accuracy using WikiNews testset" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "cb5e9f34c283fb1a8638394e7c1060d04fe4ca7f" ], "answer": [ { "evidence": [ "Khoja's stemmer BIBREF4 and Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from Penn Arabic Tree bank (PATB)), and the reported accuracy was 96.2% as the percentage of words where the chosen analysis (provided by SAMA morphological analyzer BIBREF7 ) has the correct lemma.", "As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, so another column for undiacritized lemma is added and it's used for evaluating our lemmatizer and comparing with state-of-the-art system for lemmatization; MADAMIRA." ], "extractive_spans": [ " MADAMIRA BIBREF6 system" ], "free_form_answer": "", "highlighted_evidence": [ "Recently, MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from Penn Arabic Tree bank (PATB)), and the reported accuracy was 96.2% as the percentage of words where the chosen analysis (provided by SAMA morphological analyzer BIBREF7 ) has the correct lemma.", "As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, so another column for undiacritized lemma is added and it's used for evaluating our lemmatizer and comparing with state-of-the-art system for lemmatization; MADAMIRA." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "c298f88263f2a0d59eba884e9f6ce7b23a3044e8" ], "answer": [ { "evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) 
Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.", "Word are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 ." ], "extractive_spans": [ "Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization" ], "free_form_answer": "", "highlighted_evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.\n\nWord are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "7470eb50839c2a754141ea9429ca4523ce9e64da" ], "answer": [ { "evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ], "extractive_spans": [ "Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each" ], "free_form_answer": "", "highlighted_evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "5c5b74534c190cf078c20a34e0b404737e52f886" ], "answer": [ { "evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ], "extractive_spans": [ "from Arabic WikiNews site https://ar.wikinews.org/wiki" ], "free_form_answer": "", "highlighted_evidence": [ "To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. 
These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "question": [ "How was speed measured?", "What were their accuracy results on the task?", "What is the state of the art?", "How was the dataset annotated?", "What is the size of the dataset?", "Where did they collect their dataset from?" ], "question_id": [ "da845a2a930fd6a3267950bec5928205b6c6e8e8", "2fa0b9d0cb26e1be8eae7e782ada6820bc2c037f", "76ce9e02d97e2d77fe28c0fa78526809e7c195c6", "64c7545ce349265e0c97fd6c434a5f8efdc23777", "47822fec590e840438a3054b7f512fec09dbd1e1", "989271972b3176d0a5dabd1cc0e4bdb671269c96" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ] }
{ "caption": [ "Table 1: Examples of complex verb lemmatization cases", "Table 2: Examples of complex noun lemmatization cases", "Figure 2: Buckwalter analysis (diacritization forms and lemmas are highlighted)", "Figure 1: Lemmatization of WikiNews corpus", "Table 3: Lemmatization accuracy using WikiNews testset", "Figure 4: Lemmatization online demo (part of Farasa Arabic NLP tools)" ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Figure2-1.png", "3-Figure1-1.png", "4-Table3-1.png", "5-Figure4-1.png" ] }
[ "How was speed measured?", "What were their accuracy results on the task?" ]
[ [ "1710.06700-Evaluation-2" ], [ "1710.06700-4-Table3-1.png" ] ]
[ "how long it takes the system to lemmatize a set number of words", "97.32%" ]
392
1709.08299
Dataset for the First Evaluation on Chinese Machine Reading Comprehension
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention. However, existing reading comprehension datasets are mostly in English. To add diversity to reading comprehension datasets, in this paper we propose a new Chinese reading comprehension dataset for accelerating related research in the community. The proposed dataset contains two different types: cloze-style reading comprehension and user query reading comprehension, associated with large-scale training data as well as human-annotated validation and hidden test sets. Along with this dataset, we also hosted the first Evaluation on Chinese Machine Reading Comprehension (CMRC-2017) and successfully attracted tens of participants, which suggests the potential impact of this dataset.
{ "paragraphs": [ [ "Machine Reading Comprehension (MRC) has become enormously popular in recent research, which aims to teach the machine to comprehend human languages and answer the questions based on the reading materials. Among various reading comprehension tasks, the cloze-style reaing comprehension is relatively easy to follow due to its simplicity in definition, which requires the model to fill an exact word into the query to form a coherent sentence according to the document material. Several cloze-style reading comprehension datasets are publicly available, such as CNN/Daily Mail BIBREF0 , Children's Book Test BIBREF1 , People Daily and Children's Fairy Tale BIBREF2 .", "In this paper, we provide a new Chinese reading comprehension dataset, which has the following features", "We also host the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC2017), which has attracted over 30 participants and finally there were 17 participants submitted their evaluation systems for testing their reading comprehension models on our newly developed dataset, suggesting its potential impact. We hope the release of the dataset to the public will accelerate the progress of Chinese research community on machine reading comprehension field.", "We also provide four official baselines for the evaluations, including two traditional baselines and two neural baselines. In this paper, we adopt two widely used neural reading comprehension model: AS Reader BIBREF3 and AoA Reader BIBREF4 .", "The rest of the paper will be organized as follows. In Section 2, we will introduce the related works on the reading comprehension dataset, and then the proposed dataset as well as our competitions will be illustrated in Section 3. The baseline and participant system results will be given in Section 4 and we will made a brief conclusion at the end of this paper." ], [ "In this section, we will introduce several public cloze-style reading comprehension dataset." ], [ "Some news articles often come along with a short summary or brief introduction. Inspired by this, Hermann et al. hermann-etal-2015 release the first cloze-style reading comprehension dataset, called CNN/Daily Mail. Firstly, they obtained large-scale CNN and Daily Mail news data from online websites, including main body and its summary. Then they regard the main body of the news as the Document. The Query is generated by replacing a name entity word from the summary by a placeholder, and the replaced named entity word becomes the Answer. Along with the techniques illustrated above, after the initial data generation, they also propose to anonymize all named entity tokens in the data to avoid the model exploit world knowledge of specific entities, increasing the difficulties in this dataset. However, as we have known that world knowledge is very important when we do reading comprehension in reality, which makes this dataset much artificial than real situation. Chen et al. chen-etal-2016 also showed that the proposed anonymization in CNN/Daily Mail dataset is less useful, and the current models BIBREF3 , BIBREF5 are nearly reaching ceiling performance with the automatically generated dataset which contains much errors, such as coreference errors, ambiguous questions etc." ], [ "Another popular cloze-style reading comprehension dataset is the Children's Book Test (CBT) proposed by Hill et al. hill-etal-2015 which was built from the children's book stories. 
Though the CBT dataset also use an automatic way for data generation, there are several differences to the CNN/Daily Mail dataset. They regard the first 20 consecutive sentences in a story as the Document and the following 21st sentence as the Query where one token is replaced by a placeholder to indicate the blank to fill in. Unlike the CNN/Daily Mail dataset, in CBT, the replaced word are chosen from various types: Name Entity (NE), Common Nouns (CN), Verbs (V) and Prepositions (P). The experimental results showed that, the verb and preposition answers are not sensitive to the changes of document, so the following works are mainly focusing on solving the NE and CN genres." ], [ "The previously mentioned datasets are all in English. To add diversities to the reading comprehension datasets, Cui et al. cui-etal-2016 proposed the first Chinese cloze-style reading comprehension dataset: People Daily & Children's Fairy Tale, including People Daily news datasets and Children's Fairy Tale datasets. They also generate the data in an automatic manner, which is similar to the previous datasets. They choose short articles (several hundreds of words) as Document and remove a word from it, whose type is mostly named entities and common nouns. Then the sentence that contains the removed word will be regarded as Query. To add difficulties to the dataset, along with the automatically generated evaluation sets (validation/test), they also release a human-annotated evaluation set. The experimental results show that the human-annotated evaluation set is significantly harder than the automatically generated questions. The reason would be that the automatically generated data is accordance with the training data which is also automatically generated and they share many similar characteristics, which is not the case when it comes to human-annotated data." ], [ "In this section, we will briefly introduce the evaluation tracks and then the generation method of our dataset will be illustrated in detail." ], [ "The proposed dataset is typically used for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), which aims to provide a communication platform to the Chinese communities in the related fields. In this evaluation, we provide two tracks. We provide a shared training data for both tracks and separated evaluation data.", "Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where training and test set are exactly the same type.", "User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e. training and testing is fairly different.", "Following Rajpurkar et al. rajpurkar-etal-2016, we preserve the test set only visible to ourselves and require the participants submit their system in order to provide a fair comparison among participants and avoid tuning performance on the test set. The examples of Cloze and User Query Track are given in Figure 1 ." ], [ "The cloze-style reading comprehension can be described as a triple $\\langle \\mathcal {D}, \\mathcal {Q}, \\mathcal {A} \\rangle $ , where $\\mathcal {D}$ represents Document, $\\mathcal {Q}$ represents Query and the $\\mathcal {A}$ represents Answer. There is a restriction that the answer should be a single word and should appear in the document, which was also adopted in BIBREF1 , BIBREF2 . 
In our dataset, we mainly focus on answering common nouns and named entities, which require further comprehension of the document." ], [ "Following Cui et al. BIBREF2, we use a similar approach to generate our training data automatically. We first collected roughly 20,000 passages from children's reading materials that were crawled in-house. Briefly, we choose an answer word in the document and treat the sentence containing the answer word as the query, in which the answer is replaced by the placeholder “XXXXX”. The detailed procedure is as follows.", "Pre-processing: For each sentence in the document, we perform word segmentation, POS tagging and dependency parsing using the LTP toolkit BIBREF6.", "Dependency Extraction: Extract the following dependencies: COO, SBV, VOB, HED, FOB, IOB, POB, and only preserve the parts that have dependencies.", "Further Filtering: Only preserve SBV and VOB relations, and restrict the related words to be neither pronouns nor verbs.", "Frequency Restriction: After calculating word frequencies, only words with a frequency greater than 2 are valid for generating questions.", "Question Restriction: At most five questions can be extracted from one passage." ], [ "Apart from the automatically generated large-scale training data, we also provide human-annotated validation and test data to improve the quality of evaluation. The annotation procedure took one month with 5 annotators, and each question was cross-validated by another annotator. The detailed procedure for each type of data is as follows.", "For the cloze validation and test sets, we first randomly choose 5,000 paragraphs each for automatic question generation using the techniques mentioned above. We then invite our resource team to manually select 2,000 questions based on the following rules.", "Whether the question is appropriate and correct", "Whether the question is hard for LMs to answer", "Only select one question for each paragraph", "Unlike the cloze data, there is no automatic question generation procedure for this type. In the user query dataset, we asked our annotators to directly raise questions according to the passage, which is much more difficult and time-consuming than selecting automatically generated questions. We also assign 5,000 paragraphs for question annotation in both the validation and test data. The following rules are applied when asking questions.", "The paragraph should be read carefully and judged as to whether it is appropriate for asking questions", "No more than 5 questions for each passage", "The answer should preferably be a noun or named entity so that it can be fully evaluated", "Too long or too short paragraphs should be skipped" ], [ "In this section, we present several baseline systems for evaluating our dataset, as well as several top-ranked systems in the competition." ], [ "We set up several baseline systems to test the basic performance of our dataset and to provide meaningful comparisons to the participant systems. In this paper, we provide four baseline systems, including two simple ones and two neural network models.
The details of the baseline systems are as follows.", "Random Guess: In this baseline, we randomly choose one word in the document as the answer.", "Top Frequency: We choose the most frequent word in the document as the answer.", "AS Reader: We implemented the Attention Sum Reader (AS Reader) BIBREF3, which models the document and query and predicts the answer with a Pointer Network BIBREF7, and is a popular framework for cloze-style reading comprehension. Apart from setting the embedding and hidden layer sizes to 256, we did not change the other hyper-parameters and experimental setups used in Kadlec et al. kadlec-etal-2016, nor did we tune the system for further improvements.", "AoA Reader: We also implemented the Attention-over-Attention Reader (AoA Reader) BIBREF4, which is the state-of-the-art model for cloze-style reading comprehension. We follow the hyper-parameter settings of the AS Reader baseline without further tuning.", "In the User Query Track, as there is a gap between training and validation, we follow BIBREF8 and regard this task as a domain adaptation or transfer learning problem. The neural baselines are built by the following steps.", "We first use the shared training data to build a general system, and choose the best performing model (in terms of the cloze validation set) as the baseline.", "Use the User Query validation data for further tuning of the systems with 10-fold cross-validation.", "Increase the dropout rate BIBREF9 to 0.5 to prevent over-fitting.", "All baseline systems are chosen according to their performance on the validation set." ], [ "The participant system results are given in Tables 2 and 3.", "As we can see, the two neural baselines are competitive among the participant systems, and AoA Reader outperforms AS Reader and all participant systems in the single-model condition, which shows that it is a strong baseline even without a further fine-tuning procedure. Also, the best performing single model among the participant systems failed to win in the ensemble condition, which suggests that choosing the right ensemble method is essential in most competitions and should be carefully studied for further performance improvements.", "Not surprisingly, we only received three participant systems in the User Query Track, as it is much more difficult than the Cloze Track. As shown in Table 3, the test set performance is significantly lower than that of the Cloze Track, due to the mismatch between training and test data. The baselines give competitive performance among the three participants, but fail to outperform the best single model by ECNU, which suggests that there is much room for tuning and for using more sophisticated domain adaptation methods." ], [ "In this paper, we propose a new Chinese reading comprehension dataset for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), consisting of a large-scale automatically generated training set and human-annotated validation and test sets. Many participants have verified their algorithms on this dataset and tested them on the hidden test set for final evaluation. The experimental results show that the neural baselines are tough to beat and that there is still much room for using more sophisticated transfer learning methods to better solve the User Query Task. We hope that the release of the full dataset (including the hidden test set) will help the participants gain a better understanding of their systems and encourage more researchers to experiment on it."
], [ "We would like to thank the anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. We thank the Sixteenth China National Conference on Computational Linguistics (CCL 2017) and Nanjing Normal University for providing space for evaluation workshop. Also we want to thank our resource team for annotating and verifying evaluation data. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409." ] ], "section_name": [ "Introduction", "Related Works", "CNN/Daily Mail", "Children's Book Test", "People Daily & Children's Fairy Tale", "The Proposed Dataset", "The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017)", "Definition of Cloze Task", "Automatic Generation", "Human Annotation", "Experiments", "Baseline Systems", "Participant Systems", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "82716e753629b11f8be6efd94942b8685e0213f4" ], "answer": [ { "evidence": [ "Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where training and test set are exactly the same type.", "User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e. training and testing is fairly different." ], "extractive_spans": [], "free_form_answer": "cloze-style reading comprehension and user query reading comprehension questions", "highlighted_evidence": [ "Cloze Track", "User Query Track" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] }, { "annotation_id": [ "2ab68fe13dc73cd98fbd30301b89c48f61d1988d" ], "answer": [ { "evidence": [ "The previously mentioned datasets are all in English. To add diversities to the reading comprehension datasets, Cui et al. cui-etal-2016 proposed the first Chinese cloze-style reading comprehension dataset: People Daily & Children's Fairy Tale, including People Daily news datasets and Children's Fairy Tale datasets. They also generate the data in an automatic manner, which is similar to the previous datasets. They choose short articles (several hundreds of words) as Document and remove a word from it, whose type is mostly named entities and common nouns. Then the sentence that contains the removed word will be regarded as Query. To add difficulties to the dataset, along with the automatically generated evaluation sets (validation/test), they also release a human-annotated evaluation set. The experimental results show that the human-annotated evaluation set is significantly harder than the automatically generated questions. The reason would be that the automatically generated data is accordance with the training data which is also automatically generated and they share many similar characteristics, which is not the case when it comes to human-annotated data." ], "extractive_spans": [], "free_form_answer": "English", "highlighted_evidence": [ "The previously mentioned datasets are all in English" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "somewhat", "somewhat" ], "question": [ "What two types the Chinese reading comprehension dataset consists of?", "For which languages most of the existing MRC datasets are created?" ], "question_id": [ "7a7e279170e7a2f3bc953c37ee393de8ea7bd82f", "e3981a11d3d6a8ab31e1b0aa2de96f253653cfb2" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Machine Reading", "Machine Reading" ], "topic_background": [ "research", "research" ] }
{ "caption": [ "Table 1: Statistics of the dataset for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017).", "Figure 1: Examples of the proposed datasets (the English translation is in grey). The sentence ID is depicted at the beginning of each row. In the Cloze Track, “XXXXX” represents the missing word.", "Table 2: Results on Cloze Track. The best baseline and participant systems are depicted in bold face.", "Table 3: Results on User Query Track. Due to the using of validation data, we did not report its performance." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "4-Table2-1.png", "4-Table3-1.png" ] }
[ "What two types the Chinese reading comprehension dataset consists of?", "For which languages most of the existing MRC datasets are created?" ]
[ [ "1709.08299-The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017)-1", "1709.08299-The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017)-2" ], [ "1709.08299-People Daily & Children's Fairy Tale-0" ] ]
[ "cloze-style reading comprehension and user query reading comprehension questions", "English" ]
397
1909.08167
Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis
Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution.
{ "paragraphs": [ [ "Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T).", "In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant.", "However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\\rm {P}(\\rm {Y})$ shifts across domains. Specifically, let $\\rm {X}$ and $\\rm {Y}$ denote the input and label random variable, respectively, and $G(\\rm {X})$ denote the feature representation of $\\rm {X}$. We found out that when $\\rm {P}(\\rm {Y})$ changes across domains while $\\rm {P}(\\rm {X}|\\rm {Y})$ stays the same, forcing $G(\\rm {X})$ to be domain-invariant will make $G(\\rm {X})$ uninformative to $\\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\\rm {P}(\\rm {Y})$ and $\\rm {P}(\\rm {X}|\\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\\rm {P}(\\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\\rm {P}(\\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\\rm {P}_T(\\mathbf {Y})$ with that of source domain labeled data $\\rm {P}_S(\\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7.", "To address the problem of DIRL resulted from the shift of $\\rm {P}(\\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\\mathbf {w}$ to weigh source domain examples by class, hoping to make $\\rm {P}(\\rm {Y})$ of the weighted source domain close to that of the target domain. 
Based on $\\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\\rm {P}(\\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\\rm {P}_S(\\rm {Y}|\\rm {X}; \\mathbf {\\Phi })$ and a class weight $\\mathbf {w}$. In the second step, it resolves the shift of $\\rm {P}(\\rm {Y}|\\rm {X})$ by adjusting $\\rm {P}_S(\\rm {Y}|\\rm {X}; \\mathbf {\\Phi })$ using $\\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively.", "In summary, the contributions of this paper include: ($\\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\\rm {P}(\\rm {Y})$ shifts across domains. ($\\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts." ], [ "For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\\rm {X} \\times \\rm {Y}$: the source domain $\\rm {P}_S(\\rm {X},\\rm {Y})$ and the target domain $\\rm {P}_T(\\rm {X},\\rm {Y})$. And there is a labeled data set $\\mathcal {D}_S$ drawn $i.i.d$ from $\\rm {P}_S(\\rm {X},\\rm {Y})$ and an unlabeled data set $\\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\\rm {P}_T(\\rm {X})$:", "The goal of domain adaptation is to build a classier $f:\\rm {X} \\rightarrow \\rm {Y}$ that has good performance in the target domain using $\\mathcal {D}_S$ and $\\mathcal {D}_T$.", "For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22." ], [ "Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25.", "Theorem 1 For a hypothesis $h$,", "Here, $\\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions.", "Based on Theorem UNKREF3 and assuming that performing feature transform on $\\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\\rm {X}$, hoping to obtain a feature representation $G(\\rm {X})$ that has a lower value of ${d}_{1}(\\rm {P}_S(G(\\rm {X})), \\rm {P}_T(G(\\rm {X})))$. 
To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction as the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let denote $\\rm {X}_S$ and $\\rm {X}_T$ an $M$ dimensional random vector on the compact interval $[a; b]^M$ over distribution $\\rm {P}_S$ and $\\rm {P}_T$, respectively. The CMD loss between $\\rm {P}_S$ and $\\rm {P}_T$ is defined by:", "Here, $\\mathbb {E}(\\rm {X})$ denotes the expectation of $\\rm {X}$ over distribution $\\rm {P}_S(\\rm {X})$, and", "is the $k$-th momentum, where $\\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\\rm {X}$.", "The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction as the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss:", "over its trainable parameters, while in contrast $G$ was trained to maximize $\\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimize the Jensen-shannon divergence BIBREF27, BIBREF28 $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$ between $\\rm {P}_S(G(\\rm {X}))$ and $\\rm {P}_T(G(\\rm {X}))$ over $G$. Here, for a concise expression, we write $\\rm {P}$ as the shorthand for $\\rm {P}(G(\\rm {X}))$.", "The task loss is the combination of the supervised learning loss $\\mathcal {L}_{sup}$ and the domain-invariant learning loss $\\mathcal {L}_{inv}$, which are defined on $\\mathcal {D}_S$ only and on the combination of $\\mathcal {D}_S$ and $\\mathcal {D}_T$, respectively:", "Here, $\\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$ and $\\text{CMD}_K$ are two concrete forms of $\\mathcal {L}_{inv}$." ], [ "In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\\rm {P}(\\rm {Y})$ shifts across domains. Specifically, when $\\rm {P}_S(\\rm {Y})$ differs from $\\rm {P}_T(\\rm {Y})$, forcing the feature representations $G(\\rm {X})$ to be domain-invariant may increase the value of $\\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\\rm {P}_S(\\rm {X}|\\rm {Y})=\\rm {P}_T(\\rm {X}|\\rm {Y})$. 
Then, we consider the more general condition that $\\rm {P}_S(\\rm {X}|\\rm {Y})$ also differs from $\\rm {P}_T(\\rm {X}|\\rm {Y})$.", "When $\\rm {P}_S(\\rm {X}|\\rm {Y})=\\rm {P}_T(\\rm {X}|\\rm {Y})$, we have the following theorem.", "Theorem 2 Given $\\rm {P}_S(\\rm {X}|\\rm {Y})=\\rm {P}_T(\\rm {X}|\\rm {Y})$, if $\\rm {P}_S(\\rm {Y}=i) \\ne \\rm {P}_T(\\rm {Y}=i)$ and a feature map $G$ makes $\\rm {P}_S \\left( \\mathcal {M}(\\rm {X}))=\\rm {P}_T(\\mathcal {M}(\\rm {X}) \\right)$, then $\\rm {P}_S(\\rm {Y}=i|\\mathcal {M}(\\rm {X}))=\\rm {P}_S(\\rm {Y}=i)$.", "Proofs appear in Appendix A." ], [ "According to Theorem UNKREF8, we know that when $\\rm {P}_S(\\rm {X}|\\rm {Y})=\\rm {P}_T(\\rm {X}|\\rm {Y})$ and $\\rm {P}_S(\\rm {Y}=i) \\ne \\rm {P}_T(\\rm {Y}=i)$, forcing $G(\\rm {X})$ to be domain-invariant inclines to make data of class $i$ mix with data of other classes in the space of $G(\\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Think about such an extreme case that every instance $x$ is mapped to a consistent point $g_0$ in $G(\\rm {X})$. In this case, $\\rm {P}_S(G(\\rm {X})=g_0)= \\rm {P}_T(G(\\rm {X})=g_0) = 1$. Therefore, $G(\\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \\operatornamewithlimits{arg\\,max}_y \\rm {P}_S(\\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B.", "When $\\rm {P}_S(\\rm {Y})\\ne \\rm {P}_T(\\rm {Y})$ and $\\rm {P}_S(\\rm {X}|\\rm {Y}) \\ne \\rm {P}_T(\\rm {X}|\\rm {Y})$, we did not obtain such a strong conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the object of achieving superior classification performance and that of making features domain-invariant.", "Suppose that $\\rm {P}_S(\\rm {Y}=i) \\ne \\rm {P}_T(\\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the rest classes in $G(\\rm {X})$, i.e.,:", "In DIRL, we hope that:", "Consider the region $x \\in \\mathcal {X}_i$, where $\\rm {P}(G(\\rm {X}=x)|\\rm {Y}=i)>0$. According to the above assumption, we know that $\\rm {P}(G(\\rm {X}=x \\in \\mathcal {X}_i)|\\rm {Y} \\ne i) = 0$. Therefore, applying DIRL will force", "in region $x \\in \\mathcal {X}_i$. Taking the integral of $x$ over $\\mathcal {X}_i$ for both sides of the equation, we have $\\rm {P}_S(\\rm {Y}=i) = \\rm {P}_T(\\rm {Y}=i)$. This deduction contradicts with the setting that $\\rm {P}_S(\\rm {Y}=i) \\ne \\rm {P}_T(\\rm {Y}=i)$. Therefore, $G(\\rm {X})$ is impossible fully class-separable when it is domain-invariant. Note that the object of the supervised learning is exactly to make $G(\\rm {X})$ class-separable. Thus, this actually indicates a conflict between the supervised learning and the domain-invariant representation learning.", "Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and at the same time, domain-invariant using the DIRL framework, when $\\rm {P}(\\rm {Y})$ shifts across domains. However, the shift of $\\rm {P}(\\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, it is worthy of studying in order to deal with the problem of DIRL." 
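The preceding paragraphs use the central moment discrepancy (CMD) as the concrete metric-based domain-invariance loss. Since the equations themselves are elided in the extracted text, the numpy sketch below follows the standard CMD definition, with a mean-difference term plus L2 distances between central moments of orders 2 to K, each scaled by 1/|b-a|^k; treat that scaling, and the standalone (non-differentiable-training) setting, as assumptions. In the paper the loss is applied to hidden representations G(X) inside the training objective, which this sketch does not cover.

```python
import numpy as np

def cmd_k(xs, xt, k_max=5, a=0.0, b=1.0):
    """Empirical central moment discrepancy between two samples.

    xs, xt: arrays of shape (n_samples, n_features), assumed to lie in [a, b].
    Returns the scaled mean-difference term plus scaled L2 distances
    between central moments of orders 2..k_max.
    """
    span = abs(b - a)
    mean_s, mean_t = xs.mean(axis=0), xt.mean(axis=0)
    cmd = np.linalg.norm(mean_s - mean_t) / span
    for k in range(2, k_max + 1):
        ck_s = ((xs - mean_s) ** k).mean(axis=0)  # k-th central moment, per feature
        ck_t = ((xt - mean_t) ** k).mean(axis=0)
        cmd += np.linalg.norm(ck_s - ck_t) / span ** k
    return cmd

# Toy check: two samples from the same distribution give a small discrepancy.
rng = np.random.default_rng(0)
print(cmd_k(rng.uniform(size=(2000, 8)), rng.uniform(size=(2000, 8))))
```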
], [ "According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\\rm {P}(\\rm {Y})$ to DIRL. The key idea of this framework is to first align $\\rm {P}(\\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\\rm {P}(\\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\\rm {P}(\\rm {Y})$ during the alignment of $\\rm {P}(\\rm {X}|\\rm {Y})$. In the second step, it uses $\\mathbf {w}$ to reweigh the supervised classifier $\\rm {P}_S(\\rm {Y}|\\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively." ], [ "The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\\rm {P}(\\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$. Specifically, we hope that:", "and we denote $\\mathbf {w}^*$ the value of $\\mathbf {w}$ that makes this equation hold. We shall see that when $\\mathbf {w}=\\mathbf {w}^*$, DIRL is to align $\\rm {P}_S(G(\\rm {X})|\\rm {Y})$ with $\\rm {P}_T(G(\\rm {X})|\\rm {Y})$ without the shift of $\\rm {P}(\\rm {Y})$. According to our analysis, we know that due to the shift of $\\rm {P}(\\rm {Y})$, there is a conflict between the training objects of the supervised learning $\\mathcal {L}_{sup}$ and the domain-invariant learning $\\mathcal {L}_{inv}$. And the conflict degree will decrease as $\\rm {P}_S(\\rm {Y})$ getting close to $\\rm {P}_T(\\rm {Y})$. Therefore, during model training, $\\mathbf {w}$ is expected to be optimized toward $\\mathbf {w}^*$ since it will make $\\rm {P}(\\rm {Y})$ of the weighted source domain close to $\\rm {P}_T(\\rm {Y})$, so as to solve the conflict.", "We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\\mathbb {S}:\\rm {P} \\rightarrow {R}$ denote a statistic function defined over a distribution $\\rm {P}$. For example, the expectation function $\\mathbb {E}(\\rm {X})$ in $\\mathbb {E}(\\rm {X}_S) \\equiv \\mathbb {E}(\\rm {X})(\\rm {P}_S(\\rm {X}))$ is a concrete instaintiation of $\\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\\mathbb {S}(\\rm {P}_S(\\rm {X}))$ defined in $\\mathcal {L}_{inv}$ with", "Take the CMD metric as an example. In WDIRL, the revised form of ${\\text{CMD}}_K$ is defined by:", "Here, $\\mathbb {E}(\\rm {X}_S|\\rm {Y}_S=i) \\equiv \\mathbb {E}(\\rm {X})(\\rm {P}_S(\\rm {X}|\\rm {Y}=i))$ denotes the expectation of $\\rm {X}$ over distribution $\\rm {P}_S(\\rm {X}|\\rm {Y}=i)$. 
Note that both $\\rm {P}_S(\\rm {Y}=i)$ and $\\mathbb {E}(\\rm {X}_S|\\rm {Y}_S=i)$ can be estimated using source labeled data, and $\\mathbb {E}(\\rm {X}_T)$ can be estimated using target unlabeled data.", "As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by:", "During model training, $D$ is optimized in the direction to minimize $\\hat{\\mathcal {L}}_d$, while $G$ and $\\mathbf {w}$ are optimized to maximize $\\hat{\\mathcal {L}}_d$. In the following, we denote $\\widehat{\\text{JSD}}(\\rm {P}_S, \\rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning.", "The general task loss in WDIRL is defined by:", "where $\\hat{\\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\\widehat{\\text{CMD}}_K$ and $\\widehat{\\text{JSD}}(\\rm {P}_S, \\rm {P}_T)$." ], [ "In the above step, we align $\\rm {P}(\\rm {X}|\\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\\rm {P}(\\rm {Y})$. Suppose that we have successfully resolved the shift of $\\rm {P}(\\rm {X}|\\rm {Y})$ with $G$, i.e., $\\rm {P}_S(G(\\rm {X})|\\rm {Y})=\\rm {P}_T(G(\\rm {X})|\\rm {Y})$. Then, according to the work of BIBREF29, we have:", "where $\\gamma (\\rm {Y}=i)={\\rm {P}_T(\\rm {Y}=i)}/{\\rm {P}_S(\\rm {Y}=i)}$. Of course, in most of the real-world tasks, we do not know the value of $\\gamma (\\rm {Y}=i)$. However, note that $\\gamma (\\rm {Y}=i)$ is exactly the expected class weight $\\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\\gamma (\\rm {Y}=i)$ with the obtained $\\mathbf {w}_i$ in the first step and estimate $\\rm {P}_T(\\rm {Y}|G(\\rm {X}))$ with:", "In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\\hat{\\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\\mathcal {D}_S$ and $\\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\\rm {P}_S(\\rm {Y}|\\rm {X}; \\mathbf {\\Phi })$ and a class weight vector $\\mathbf {w}$; and finally, adjust $\\rm {P}_S(\\rm {Y}|\\rm {X}; \\mathbf {\\Phi })$ using $\\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\\rm {P}_T(\\rm {Y}|\\rm {X}; \\mathbf {\\Phi })$." ], [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 to our proposed solution, respectively. 
To performe the study, we carried out performance comparison between the following models:", "SO: the source-only model trained using source domain labeled data without any domain adaptation.", "CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{CMD}_K$.", "DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$.", "$\\text{CMD}^\\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.", "$\\text{DANN}^\\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.", "$\\text{CMD}^{\\dagger \\dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.", "$\\text{DANN}^{\\dagger \\dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.", "$\\text{CMD}^{*}$: a variant of $\\text{CMD}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ (estimate from target labeled data) to $\\mathbf {w}$ and fixes this value during model training.", "$\\text{DANN}^{*}$: a variant of $\\text{DANN}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ to $\\mathbf {w}$ and fixes this value during model training.", "Intrinsically, SO can provide an empirical lowerbound for those domain adaptation methods. $\\text{CMD}^{*}$ and $\\text{DANN}^{*}$ can provide the empirical upbound of $\\text{CMD}^{\\dagger \\dagger }$ and $\\text{DANN}^{\\dagger \\dagger }$, respectively. In addition, by comparing performance of $\\text{CMD}^{*}$ and $\\text{DANN}^{*}$ with that of $\\text{SO}$, we can know the effectiveness of the DIRL framework when $\\rm {P}(\\rm {Y})$ dose not shift across domains. By comparing $\\text{CMD}^\\dagger $ with $\\text{CMD}$, or comparing $\\text{DANN}^\\dagger $ with $\\text{DANN}$, we can know the effectiveness of the first step of our proposed method. By comparing $\\text{CMD}^{\\dagger \\dagger }$ with $\\text{CMD}^{\\dagger }$, or comparing $\\text{DANN}^{\\dagger \\dagger }$ with $\\text{DANN}^{\\dagger }$, we can know the impact of the second step of our proposed method. And finally, by comparing $\\text{CMD}^{\\dagger \\dagger }$ with $\\text{CMD}$, or comparing $\\text{DANN}^{\\dagger \\dagger }$ with $\\text{DANN}$, we can know the general effectiveness of our proposed solution." ], [ "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams." ], [ "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. 
For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study." ], [ "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$." ], [ "For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\\text{DANN}^{\\dagger }$, $\\text{DANN}^{\\dagger \\dagger }$, and $\\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\\text{CMD}_K$ and $\\widehat{\\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks.", "Hyper-parameter $\\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\\alpha =[1, \\cdots , 10]$ on task B $\\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\\rm {P}(\\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\\alpha $ and $\\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training.", "To initialize $\\mathbf {w}$, we used label prediction of the source-only model. Specifically, let $\\rm {P}_{SO}(\\rm {Y}|\\rm {X}; \\mathbf {\\theta }_{SO})$ denote the trained source-only model. We initialized $\\mathbf {w}_i$ by:", "Here, $\\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\\dagger \\dagger }$ over different initializations of $\\mathbf {w}$ on 2 within-group (B$\\rightarrow $D, E$\\rightarrow $K) and 2 cross-group (B$\\rightarrow $K, D$\\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. 
Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\\rm {P}_{S}(\\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\\dagger \\dagger }$ generally outperformed its CMD counterparts with different initialization of $\\mathbf {w}$. Second, it was better to initialize $\\mathbf {w}$ with a relatively balanced value, i.e., $\\mathbf {w}_i \\rm {P}_S(\\rm {Y}=i) \\rightarrow \\frac{1}{L}$ (in this experiment, $L=2$). Finally, $\\mathbf {w}^0$ was often a good initialization of $\\mathbf {w}$, indicating the effectiveness of the above strategy." ], [ "Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\\text{CMD}^{\\dagger \\dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\\text{DANN}^{\\dagger \\dagger }$ with that of DANN and SO. Third, $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$ consistently outperformed $\\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\\text{CMD}^{\\dagger \\dagger }$ and $\\text{DANN}^{\\dagger \\dagger }$ outperforms $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\\text{Acc}(\\text{CMD})-\\text{Acc}(\\text{SO}))/\\text{Acc}(\\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\\rm {P}(\\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\\rm {P}(\\rm {Y})$ shift. In contrast, our proposed model $\\text{CMD}^{\\dagger \\dagger }$ performed robustly to the varying of $\\rm {P}(\\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\\text{CMD}^{*}$. This again verified the effectiveness of our solution.", "Table TABREF34 reports model performance on the 2 within-group (B$\\rightarrow $D, E$\\rightarrow $K) and the 2 cross-group (B$\\rightarrow $K, D$\\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). From this table, we observe that on some tested tasks, $\\text{CMD}^{\\dagger \\dagger }$ and $\\text{DANN}^{\\dagger \\dagger }$ did not greatly outperform or even slightly underperformed $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the estimated or learned value of $\\mathbf {w}$ using $\\mathcal {D}_T$ is not fully suitable for application to the testing dataset. 
This explanation is verified by the observation that $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$ also slightly outperforms $\\text{CMD}^{*}$ and $\\text{DANN}^{*}$ on these tasks, respectively." ], [ "In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation, when $\\rm {P}(\\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution." ] ], "section_name": [ "Introduction", "Preliminary and Related Work ::: Domain Adaptation", "Preliminary and Related Work ::: Domain Invariant Representation Learning", "Problem of Domain-Invariant Representation Learning", "Problem of Domain-Invariant Representation Learning ::: Remark.", "Weighted Domain Invariant Representation Learning", "Weighted Domain Invariant Representation Learning ::: Align @!START@$\\rm {P}(\\rm {X}|\\rm {Y})$@!END@ with Class Weight", "Weighted Domain Invariant Representation Learning ::: Align @!START@$\\rm {P}(\\rm {Y}|\\rm {X})$@!END@ with Class Weight", "Experiment ::: Experiment Design", "Experiment ::: Dataset and Task Design", "Experiment ::: Dataset and Task Design ::: Binary-Class.", "Experiment ::: Dataset and Task Design ::: Multi-Class.", "Experiment ::: Implementation Detail", "Experiment ::: Main Result", "Conclusion" ] }
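To make the two-step WDIRL recipe described in this record concrete, here is a small numpy sketch of (i) a class-weighted source statistic of the kind used inside the revised invariance loss and (ii) the step-two adjustment of the source-trained posteriors. The exact weighted form of the source statistic is only partly spelled out in the extracted text, so the w[i] * P_S(Y=i) * E[X_S | Y_S=i] combination, the fixed (non-trainable) w, and all names below are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def weighted_source_mean(xs, ys, w):
    """Class-weighted first-order statistic of the source domain:
    sum_i w[i] * P_S(Y=i) * E[X_S | Y_S=i]  (higher-order moments analogous).
    """
    stat = np.zeros(xs.shape[1])
    for i, w_i in enumerate(w):
        mask = ys == i
        if mask.any():
            stat += w_i * mask.mean() * xs[mask].mean(axis=0)
    return stat

def adjust_posteriors(p_source, w):
    """Step two: P_T(Y=i | x) taken proportional to w[i] * P_S(Y=i | x)."""
    p = p_source * w
    return p / p.sum(axis=-1, keepdims=True)

# Toy usage with a hypothetical binary class weight.
w = np.array([1.5, 0.5])
xs = np.random.default_rng(1).normal(size=(100, 4))
ys = np.array([0, 1] * 50)
print(weighted_source_mean(xs, ys, w))
print(adjust_posteriors(np.array([[0.4, 0.6], [0.7, 0.3]]), w))
```

In the actual models the class weight would be a trainable parameter optimized jointly with the feature extractor, and the weighted statistic would replace the plain source-domain statistic inside the CMD or adversarial loss.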
{ "answers": [ { "annotation_id": [ "2b35c166a73e4a7268e267e0831a9ce47f3a865b" ], "answer": [ { "evidence": [ "According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\\rm {P}(\\rm {Y})$ to DIRL. The key idea of this framework is to first align $\\rm {P}(\\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\\rm {P}(\\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\\rm {P}(\\rm {Y})$ during the alignment of $\\rm {P}(\\rm {X}|\\rm {Y})$. In the second step, it uses $\\mathbf {w}$ to reweigh the supervised classifier $\\rm {P}_S(\\rm {Y}|\\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively.", "The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\\rm {P}(\\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$. Specifically, we hope that:", "and we denote $\\mathbf {w}^*$ the value of $\\mathbf {w}$ that makes this equation hold. We shall see that when $\\mathbf {w}=\\mathbf {w}^*$, DIRL is to align $\\rm {P}_S(G(\\rm {X})|\\rm {Y})$ with $\\rm {P}_T(G(\\rm {X})|\\rm {Y})$ without the shift of $\\rm {P}(\\rm {Y})$. According to our analysis, we know that due to the shift of $\\rm {P}(\\rm {Y})$, there is a conflict between the training objects of the supervised learning $\\mathcal {L}_{sup}$ and the domain-invariant learning $\\mathcal {L}_{inv}$. And the conflict degree will decrease as $\\rm {P}_S(\\rm {Y})$ getting close to $\\rm {P}_T(\\rm {Y})$. Therefore, during model training, $\\mathbf {w}$ is expected to be optimized toward $\\mathbf {w}^*$ since it will make $\\rm {P}(\\rm {Y})$ of the weighted source domain close to $\\rm {P}_T(\\rm {Y})$, so as to solve the conflict." ], "extractive_spans": [ "To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$" ], "free_form_answer": "", "highlighted_evidence": [ "According to the above analysis", "According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\\rm {P}(\\rm {Y})$ to DIRL. The key idea of this framework is to first align $\\rm {P}(\\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\\rm {P}(\\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. ", "The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\\rm {P}(\\rm {Y})$ across domains before applying DIRL. 
Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$. Specifically, we hope that:\n\nand we denote $\\mathbf {w}^*$ the value of $\\mathbf {w}$ that makes this equation hold. ", "We shall see that when $\\mathbf {w}=\\mathbf {w}^*$, DIRL is to align $\\rm {P}_S(G(\\rm {X})|\\rm {Y})$ with $\\rm {P}_T(G(\\rm {X})|\\rm {Y})$ without the shift of $\\rm {P}(\\rm {Y})$. According to our analysis, we know that due to the shift of $\\rm {P}(\\rm {Y})$, there is a conflict between the training objects of the supervised learning $\\mathcal {L}_{sup}$ and the domain-invariant learning $\\mathcal {L}_{inv}$. And the conflict degree will decrease as $\\rm {P}_S(\\rm {Y})$ getting close to $\\rm {P}_T(\\rm {Y})$. Therefore, during model training, $\\mathbf {w}$ is expected to be optimized toward $\\mathbf {w}^*$ since it will make $\\rm {P}(\\rm {Y})$ of the weighted source domain close to $\\rm {P}_T(\\rm {Y})$, so as to solve the conflict." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "3ddf9eb0e77624cc9506fe7d0a5006c8b8560e3b" ], "answer": [ { "evidence": [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 to our proposed solution, respectively. To performe the study, we carried out performance comparison between the following models:", "SO: the source-only model trained using source domain labeled data without any domain adaptation.", "CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{CMD}_K$.", "DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$.", "$\\text{CMD}^\\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.", "$\\text{DANN}^\\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.", "$\\text{CMD}^{\\dagger \\dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.", "$\\text{DANN}^{\\dagger \\dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.", "$\\text{CMD}^{*}$: a variant of $\\text{CMD}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ (estimate from target labeled data) to $\\mathbf {w}$ and fixes this value during model training.", "$\\text{DANN}^{*}$: a variant of $\\text{DANN}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ to $\\mathbf {w}$ and fixes this value during model training.", "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. 
Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.", "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study.", "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$.", "Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\\text{CMD}^{\\dagger \\dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\\text{DANN}^{\\dagger \\dagger }$ with that of DANN and SO. Third, $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$ consistently outperformed $\\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\\text{CMD}^{\\dagger \\dagger }$ and $\\text{DANN}^{\\dagger \\dagger }$ outperforms $\\text{CMD}^{\\dagger }$ and $\\text{DANN}^{\\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\\text{Acc}(\\text{CMD})-\\text{Acc}(\\text{SO}))/\\text{Acc}(\\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\\rm {P}(\\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\\rm {P}(\\rm {Y})$ shift. 
In contrast, our proposed model $\\text{CMD}^{\\dagger \\dagger }$ performed robustly to the varying of $\\rm {P}(\\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\\text{CMD}^{*}$. This again verified the effectiveness of our solution." ], "extractive_spans": [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from." ], "free_form_answer": "", "highlighted_evidence": [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 to our proposed solution, respectively. To performe the study, we carried out performance comparison between the following models:\n\nSO: the source-only model trained using source domain labeled data without any domain adaptation.\n\nCMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{CMD}_K$.\n\nDANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\\mathcal {L}_{inv}$ with $\\text{JSD}(\\rm {P}_S, \\rm {P}_T)$.\n\n$\\text{CMD}^\\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.\n\n$\\text{DANN}^\\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.\n\n$\\text{CMD}^{\\dagger \\dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.\n\n$\\text{DANN}^{\\dagger \\dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.\n\n$\\text{CMD}^{*}$: a variant of $\\text{CMD}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ (estimate from target labeled data) to $\\mathbf {w}$ and fixes this value during model training.\n\n$\\text{DANN}^{*}$: a variant of $\\text{DANN}^{\\dagger \\dagger }$ that assigns $\\mathbf {w}^*$ to $\\mathbf {w}$ and fixes this value during model training.", "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. ", "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. ", "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). ", "Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "e925081c6530399b223e28e11a544b8056ced8eb" ], "answer": [ { "evidence": [ "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. 
This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.", "Experiment ::: Dataset and Task Design ::: Binary-Class.", "From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study.", "Experiment ::: Dataset and Task Design ::: Multi-Class.", "We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$." ], "extractive_spans": [], "free_form_answer": "12 binary-class classification and multi-class classification of reviews based on rating", "highlighted_evidence": [ "We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.\n\nExperiment ::: Dataset and Task Design ::: Binary-Class.\nFrom this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\\rightarrow $D, B$\\rightarrow $E, B$\\rightarrow $K, D$\\rightarrow $B, D$\\rightarrow $E, D$\\rightarrow $K, E$\\rightarrow $B, E$\\rightarrow $D, E$\\rightarrow $K, K$\\rightarrow $B, K$\\rightarrow $D, K$\\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. 
In addition, since it is reasonable to assume that $\\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\\rm {P}(\\rm {Y})$ shift, which was evaluated by the max value of $\\rm {P}_S(\\rm {Y}=i)/\\rm {P}_T(\\rm {Y}=i), \\forall i=1, \\cdots , L$. Please refer to Appendix C for more detail about the task design for this study.\n\nExperiment ::: Dataset and Task Design ::: Multi-Class.\nWe additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\\mathcal {D}_S$ contained 1000 examples of each class, and $\\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\\mathcal {D}_T$." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "two", "two", "two" ], "paper_read": [ "no", "no", "no" ], "question": [ "How are different domains weighted in WDIRL?", "How is DIRL evaluated?", "Which sentiment analysis tasks are addressed?" ], "question_id": [ "37016cc987d33be5ab877013ef26ec7239b48bd9", "b3dc6d95d1570ad9a58274539ff1def12df8f474", "cc5d3903913fa2e841f900372ec74b0efd5e0c71" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Mean accuracy ± standard deviation over five runs on the 12 binary-class cross-domain tasks.", "Figure 1: Mean accuracy of WCMD†† over different initialization of w. The empirical optimum value of w makes w1PS(Y = 1) = 0.75. The dot line in the same color denotes performance of the CMD model and ‘w0’ annotates performance of WCMD†† when initializing w with w0.", "Figure 2: Relative improvement over the SO baseline under different degrees of P(Y) shift on the B→D and B →K binary-class domain adaptation tasks.", "Table 2: Mean accuracy ± standard deviation over five runs on the 2 within-group and 2 cross-group multiclass domain-adaptation tasks." ], "file": [ "7-Table1-1.png", "7-Figure1-1.png", "8-Figure2-1.png", "8-Table2-1.png" ] }
[ "Which sentiment analysis tasks are addressed?" ]
[ [ "1909.08167-Experiment ::: Dataset and Task Design ::: Multi-Class.-0", "1909.08167-Experiment ::: Dataset and Task Design-0", "1909.08167-Experiment ::: Dataset and Task Design ::: Binary-Class.-0" ] ]
[ "12 binary-class classification and multi-class classification of reviews based on rating" ]
400
1911.03562
The State of NLP Literature: A Diachronic Analysis of the ACL Anthology
The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP). This paper examines the literature as a whole to identify broad trends in productivity, focus, and impact. It presents the analyses in a sequence of questions and answers. The goal is to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. Special emphasis is laid on the demographics and inclusiveness of NLP publishing. Notably, we find that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also show that, on average, female first authors are cited less than male first authors, even when controlling for experience. We hope that recording citation and participation gaps across demographic groups will encourage more inclusiveness and fairness in research.
{ "paragraphs": [ [ "The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP.", "This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations.", "We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender).", "", "Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence.", "", "Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020.", "", "Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review.", "", "Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21.", "", "Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page.", "The analyses presented here are also available as a series of blog posts." ], [ "Q. How big is the ACL Anthology (AA)? How is it changing with time?", "A. As of June 2019, AA had $\\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. 
(Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018.", "Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (In 2018 alone LREC had over 700 main conferences papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years) has about 45% of the number of main conference papers as LREC.", "Q. How many people publish in the ACL Anthology (NLP conferences)?", "A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years:", "Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be:", "Q. How many people are actively publishing in NLP?", "A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years.", "#people who published at least one paper in 2017 and 2018 (2 years): $\\sim $12k (11,957 to be precise)", "#people who published at least one paper 2015 through 2018 (4 years):$\\sim $17.5k (17,457 to be precise)", "Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years.", "Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers?", "A. See Figure FIGREF8.", "Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing.", "Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable.", "Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues?", "A. # ACL (main conference papers) as of June 2018: 4,839", "The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.).", "Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. 
SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. It is the largest single source of NLP shared task papers." ], [ "NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity)." ], [ "The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP.", "The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\\ge $99%) and 29,873 first names that are strongly associated with males (probability $\\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*.", "", "Note the following caveats associated with this analysis:", "", "The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US.", "", "Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names.", "", "The dataset only records names associated with two genders.", "", "The approach presented here is meant to be an approximation in the absence of true gender information.", "Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time?", "A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green.", "", "Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact there seems to even be a slight downward trend in recent years. 
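The name-based estimate described above is straightforward to reproduce from the public SSA baby-name counts. The sketch below is only an illustration of the idea, not the author's actual pipeline; it assumes the SSA records have been aggregated into (name, gender, count) tuples and that each AA paper record carries the first author's first name and the publication year (hypothetical field names).

```python
from collections import defaultdict

def name_gender_probabilities(ssa_rows):
    """ssa_rows: iterable of (name, gender, count) tuples, gender in {'F', 'M'}."""
    counts = defaultdict(lambda: {"F": 0, "M": 0})
    for name, gender, count in ssa_rows:
        counts[name.lower()][gender] += count
    return {name: c["F"] / (c["F"] + c["M"]) for name, c in counts.items()
            if c["F"] + c["M"] > 0}

def infer_gender(first_name, prob_female, threshold=0.99):
    """Return 'F' or 'M' only for strongly gender-associated names; None otherwise."""
    p = prob_female.get(first_name.lower())
    if p is None:
        return None              # name absent from the SSA list (e.g., many Chinese names)
    if p >= threshold:
        return "F"
    if p <= 1 - threshold:
        return "M"
    return None                  # ambiguous name; such papers fall outside AA*

def ffa_percentage_by_year(papers, prob_female):
    """papers: dicts with 'year' and 'first_author_first_name' (assumed fields)."""
    per_year = defaultdict(lambda: [0, 0])   # year -> [female-first-author papers, AA* papers]
    for paper in papers:
        g = infer_gender(paper["first_author_first_name"], prob_female)
        if g is None:
            continue
        per_year[paper["year"]][1] += 1
        if g == "F":
            per_year[paper["year"]][0] += 1
    return {year: 100.0 * f / total for year, (f, total) in per_year.items()}
```

The per-title-word FFA percentages follow the same pattern, with papers grouped by the words in their titles instead of by year.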
The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\\sim $31%). On average male authors had a slightly higher average number of publications than female authors.", "To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences.", "FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title.", "Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data." ], [ "While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18.", "Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper?", "A. Average NLP Academic Age of people that published in 2018: 5.41 years", "Median NLP Academic Age of people that published in 2018: 2 years", "Percentage of 2018 authors that published their first AA paper in 2018: 44.9%", "Figure FIGREF24 shows how these numbers have changed over the years.", "Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
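The academic-age statistics reported above reduce to finding each author's first AA publication year. A minimal sketch, assuming papers is a list of dicts with a normalized 'authors' list and a 'year' field; this is illustrative code, not the code used for the paper.

```python
from collections import defaultdict
from statistics import mean, median

def first_publication_year(papers):
    """Map each author to the year of their first AA paper."""
    first = {}
    for p in papers:
        for author in p["authors"]:
            if author not in first or p["year"] < first[author]:
                first[author] = p["year"]
    return first

def academic_age_stats(papers, target_year):
    """Average/median NLP academic age and first-time author percentage for one year."""
    first = first_publication_year(papers)
    authors = {a for p in papers if p["year"] == target_year for a in p["authors"]}
    ages = [target_year - first[a] + 1 for a in authors]   # age 1 = first year of publishing
    first_timers = sum(1 for a in authors if first[a] == target_year)
    return {"average_age": mean(ages),
            "median_age": median(ages),
            "first_time_author_pct": 100.0 * first_timers / len(authors)}
```

Given the same underlying data, academic_age_stats(papers, 2018) would reproduce the averages, medians, and first-time percentages discussed here.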
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate.", "Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on?", "A. See Figure FIGREF25.", "Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history." ], [ "Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language.", "We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages.", "We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.)", "Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green.", "Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. 
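The language-in-title heuristic is a simple scan over titles. A sketch, under the assumption that a case-insensitive whole-word match is sufficient (the exact matching rule is not spelled out) and that languages holds the 122 language names:

```python
import re
from collections import Counter

def papers_per_language(titles, languages):
    """Count how many paper titles mention each language name (whole-word match)."""
    patterns = {lang: re.compile(r"\b" + re.escape(lang) + r"\b", re.IGNORECASE)
                for lang in languages}
    counts = Counter()
    for title in titles:
        for lang, pattern in patterns.items():
            if pattern.search(title):
                counts[lang] += 1
    return counts

# papers_per_language(["Dependency Parsing for Turkish"], ["Turkish", "Hindi", "Swahili"])
# -> Counter({'Turkish': 1})
```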
However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper.", "We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world." ], [ "Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research.", "Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable.", "Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions.", "Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): \"A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area\".", "If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. 
Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different.", "It is also worth noting that a catchy term today will likely not be catchy tomorrow. Similarly, a distinctive term today may not be distinctive tomorrow. For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go.", "Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades.", "Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time?", "A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers.", "Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing.", "The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure.", "Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also much smaller.) 
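The unigram/bigram counts and the three-year inclusion rule described above can be sketched as follows. This is an illustration only, not the code used for the paper: the small stopword list standing in for "function words" is a placeholder, and the timeline helper returns raw counts per year, whereas the figures plot percentages.

```python
from collections import Counter, defaultdict

STOPWORDS = {"a", "an", "the", "of", "for", "and", "or", "in", "on", "to", "with", "via", "from"}

def title_terms(title):
    """Unigrams and bigrams from a title, ignoring a placeholder stopword list."""
    tokens = [t for t in title.lower().split() if t.isalpha() and t not in STOPWORDS]
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return tokens, bigrams

def term_counts_by_year(papers):
    """papers: dicts with 'title' and 'year'; returns year -> Counter of title terms."""
    by_year = defaultdict(Counter)
    for p in papers:
        unigrams, bigrams = title_terms(p["title"])
        by_year[p["year"]].update(unigrams)
        by_year[p["year"]].update(bigrams)
    return by_year

def timeline_points(by_year, term, min_window_count=10):
    """Keep a point for a year only if the term's frequency over that year and
    the two preceding years sums to at least min_window_count."""
    points = {}
    for year in sorted(by_year):
        window = sum(by_year.get(y, Counter())[term] for y in (year - 2, year - 1, year))
        if window >= min_window_count:
            points[year] = by_year[year][term]
    return points
```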
Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were.", "Q. What are the most frequent unigrams and bigrams in the titles of recent papers?", "A. Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published 2016 Jan to 2019 June (time of data collection).", "Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task.", "The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter a query (say parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams.", "Figure FIGREF31 shows the timeline graph for parsing.", "Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, entered a period of steep decline in the early 1990s, and has been in gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term.", "Figure FIGREF32 shows the timelines for the three bigrams statistical machine, neural machine, and machine translation:", "Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation." ], [ "Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions.", "Citations", "The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is an explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years, including: number of citations, average citations, h-index, relative citation ratio, and impact factor.", "It is not always clear why some papers get lots of citations and others do not. 
One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things.", "Note however, that the number of citations is not always a reflection of the quality or importance of a piece of work. Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citations process can be abused, for example, by egregious self-citations.", "Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact.", "In this section, we examine citations of AA papers. We focus on two aspects:", "Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at most cited papers by paper-type (long, short, demo, etc) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations.", "Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc.", "Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’." ], [ "Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades?", "A. $\\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. 
Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations.", "Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come.", "Q. What are the most cited papers in AA'?", "A. Figure FIGREF37 shows the most cited papers in AA'.", "Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers.", "In the interactive visualizations (to be released later), one can click on the url to be taken directly to the paper’s landing page on the ACL Anthology website. That page includes links to meta information, the pdf, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy.", "Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials?", "A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there.", "Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford Core NLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers.", "The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers.", "Q. What are the most cited AA' papers in the last decade?", "A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online.", "Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations." ], [ "Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans?", "A. Total citations for papers published between 1990 and 1994: $\\sim $92k", "Average citations for papers published between 1990 and 1994: 94.3", "Figure FIGREF41 shows the numbers for various time spans.", "Discussion: The early 1990s were an interesting period for NLP, with the use of data from the World Wide Web and technologies from speech processing. 
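The per-period totals and averages in this subsection reduce to a group-by over publication year. A minimal sketch, assuming each AA' record carries 'year' and 'citations' fields (assumed names, not the paper's actual data schema):

```python
from statistics import mean, median

def citation_stats_by_span(papers, span=5, start=1965, end=2019):
    """Total, average, and median citations for consecutive publication-year spans."""
    stats = {}
    for lo in range(start, end + 1, span):
        hi = lo + span - 1
        cites = [p["citations"] for p in papers if lo <= p["year"] <= hi]
        if cites:
            stats[f"{lo}-{hi}"] = {"papers": len(cites), "total": sum(cites),
                                   "average": mean(cites), "median": median(cites)}
    return stats
```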
This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) third highest average number of citations. The drop off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations." ], [ "Q. What are the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers?", "A. In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians.", "Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010.", "System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seems to have grown in recent years—something that has come under criticism.", "It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive 1 or no citations. This is in contrast to system demo papers that have average and median citations that are higher or comparable to workshop papers.", "Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten.", "Q. What are the average number of citations received by the long and short ACL main conference papers, respectively?", "A. Short papers were introduced at ACL in 2003. Since then ACL is by far the venue with the most number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers.", "Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers.", "Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers?", "A. 
CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (Figure with median citations is available online.)", "Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010.", "When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average (surpassing those of EACL and COLING); however their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING.", "Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times?", "A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.)", "Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin.", "The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations.", "Q. What are the citation bin percentages for individual venues and paper types?", "A. See Figure FIGREF51.", "Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above).", "CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) are likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, tend to often obtain 0 citations.", "CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. 
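The citation-bin percentages are a straightforward bucketing of per-paper citation counts; a minimal sketch with the bin edges used above (0, 1-9, 10-99, 100-999, 1000 or more), assuming a 'citations' field per paper:

```python
from collections import Counter

BINS = [(0, 0, "0"), (1, 9, "1-9"), (10, 99, "10-99"),
        (100, 999, "100-999"), (1000, float("inf"), "1000 or more")]

def citation_bin(citations):
    """Label for the citation bin that a paper's citation count falls into."""
    for lo, hi, label in BINS:
        if lo <= citations <= hi:
            return label

def citation_bin_percentages(papers):
    """Percentage of papers in each citation bin."""
    counts = Counter(citation_bin(p["citations"]) for p in papers)
    total = sum(counts.values())
    return {label: 100.0 * counts.get(label, 0) / total for _, _, label in BINS}
```

Restricting the input to a given venue, paper type, or time span before calling the function yields the per-venue and per-period breakdowns discussed here.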
*Sem, the semantics conference, seems to have notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates.", "Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\\sim $1600). Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP.", "About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations." ], [ "Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations?", "A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.)", "Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation.", "There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5).", "Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations.", "Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. 
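Aggregating citations by title bigram follows the same group-by pattern, with a minimum-paper threshold to avoid unstable averages. A self-contained sketch (simple whitespace tokenization; the exact preprocessing used for the figures is not specified):

```python
from collections import defaultdict
from statistics import mean, median

def title_bigrams(title):
    """Set of word bigrams in a title, lowercased, alphabetic tokens only."""
    tokens = [t for t in title.lower().split() if t.isalpha()]
    return {" ".join(pair) for pair in zip(tokens, tokens[1:])}

def citations_by_bigram(papers, min_papers=30):
    """papers: dicts with 'title' and 'citations'; citation stats per title bigram."""
    groups = defaultdict(list)
    for p in papers:
        for bigram in title_bigrams(p["title"]):   # each paper counted once per bigram
            groups[bigram].append(p["citations"])
    return {bigram: {"papers": len(c), "total": sum(c),
                     "average": mean(c), "median": median(c)}
            for bigram, c in groups.items() if len(c) >= min_papers}
```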
Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations." ], [ "In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. There are good reasons to study citations across each of these dimensions including, but not limited to, the following:", "Areas of research: To better understand research contributions in the context of the area where the contribution is made.", "Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc.", "Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations.", "Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences.", "People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases.", "This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair." ], [ "We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.)", "First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.)", "Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. 
It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors.", "Q. How does the NLP academic age of the first author correlate with the number of citations? Are first-year authors less cited than those with more experience?", "A. Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers.", "Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position).", "The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise to not draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers.", "Citations to Papers by First Author Gender", "As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics to determine gender: using the United States Social Security Administration database of names and genders of newborns, we identify 55,133 first names that are strongly associated with females (probability $\\ge $99%) and 29,873 first names that are strongly associated with males (probability $\\ge $99%).", "Q. On average, are women cited less than men?", "A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60.", "Discussion: The large difference in averages and the smaller difference in medians suggest that there are markedly more very heavily cited male first-author papers than female first-author papers. 
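The per-paper binning and the gender split described above can be sketched as below. This is illustrative only: it assumes each paper record already carries the first author's academic age at publication time and the inferred gender ('F', 'M', or None), for example computed with helpers like the ones shown earlier; the field names are assumptions.

```python
from collections import defaultdict
from statistics import mean, median

def age_group(age):
    """Individual bins up to academic age 9, then the grouped bins used in the figure."""
    if age < 10:
        return str(age)
    for lo, hi in [(10, 14), (15, 19), (20, 34), (35, 50)]:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "50+"

def citations_by_first_author_age(papers):
    """papers: dicts with 'first_author_academic_age' and 'citations' (assumed fields)."""
    bins = defaultdict(list)
    for p in papers:
        bins[age_group(p["first_author_academic_age"])].append(p["citations"])
    return {b: {"papers": len(c), "average": mean(c), "median": median(c)}
            for b, c in bins.items()}

def citations_by_first_author_gender(papers):
    """papers: dicts with 'first_author_gender' in {'F', 'M', None} and 'citations'."""
    groups = defaultdict(list)
    for p in papers:
        groups[p["first_author_gender"] or "unknown"].append(p["citations"])
    return {g: {"average": mean(c), "median": median(c)} for g, c in groups.items()}
```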
The gender-unknown category, which here largely consists of authors with Chinese origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names.", "The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next.", "Q. How has the citation gap across genders changed over the years?", "A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.", "Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s, when male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap.", "It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representation of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.)", "Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors?", "A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.)", "Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted, with a small citation gap, in the 15th to 34th year period.", "Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)?", "A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown.", "Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. 
Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP." ], [ "This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community. Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts).", "Acknowledgments", "This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource." ] ], "section_name": [ "Introduction", "Size", "Demographics (focus of analysis: gender, age, and geographic diversity)", "Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender", "Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age", "Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages)", "Areas of Research", "Impact", "Impact ::: #Citations and Most Cited Papers", "Impact ::: Average Citations by Time Span", "Impact ::: Aggregate Citation Statistics, by Paper Type and Venue", "Impact ::: Citations to Papers by Areas of Research", "Correlation of Age and Gender with Citations", "Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations", "Conclusions" ] }
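A minimal sketch of the academic-age binning and citation aggregation described in the article above. This is an illustration only, not the authors' code; the record fields (`year`, `first_author_first_year`, `citations`) and the helper names are assumptions.

```python
from collections import defaultdict
from statistics import mean, median

def age_bin(age):
    """Group higher academic ages as in the article: 10-14, 15-19, 20-34, 35-50."""
    if age <= 9:
        return str(age)
    for lo, hi in [(10, 14), (15, 19), (20, 34), (35, 50)]:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "50+"  # fallback for any ages beyond the last grouped bin

def citation_stats_by_academic_age(papers):
    """Bin papers by the first author's academic age at publication time."""
    bins = defaultdict(list)
    for p in papers:
        # A first-time first author has academic age 1.
        age = p["year"] - p["first_author_first_year"] + 1
        bins[age_bin(age)].append(p["citations"])
    return {
        b: {
            "#papers": len(cites),
            "#citations": sum(cites),
            "average": mean(cites),
            "median": median(cites),
        }
        for b, cites in bins.items()
    }

# Example usage with toy records:
papers = [
    {"year": 2015, "first_author_first_year": 2015, "citations": 4},
    {"year": 2016, "first_author_first_year": 2010, "citations": 40},
]
print(citation_stats_by_academic_age(papers))
```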
{ "answers": [ { "annotation_id": [ "2b58e703bb70e489b5e660be7244333759ea1c28" ], "answer": [ { "evidence": [ "Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP." ], "extractive_spans": [ "sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation" ], "free_form_answer": "", "highlighted_evidence": [ "Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "6d38f28afdffd7212c761c6b1a166173fe205526" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 33 The most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers." ], "extractive_spans": [], "free_form_answer": "machine translation, statistical machine, sentiment analysis", "highlighted_evidence": [ "FLOAT SELECTED: Figure 33 The most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "a105968b9c439c84c9087f4f72329709a2532340" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 25 Average citations for papers published 1965–2016 (left side) and 2010–2016 (right side), grouped by venue and paper type." ], "extractive_spans": [], "free_form_answer": "CL Journal and EMNLP conference", "highlighted_evidence": [ "FLOAT SELECTED: Figure 25 Average citations for papers published 1965–2016 (left side) and 2010–2016 (right side), grouped by venue and paper type." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "e7604084d3a172f03c138a23141f72bcf7864062" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 11 A treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green." 
], "extractive_spans": [], "free_form_answer": "English, Chinese, French, Japanese and Arabic", "highlighted_evidence": [ "FLOAT SELECTED: Figure 11 A treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "d2d24e5085c12e185863a383004c201e8db64e70" ], "answer": [ { "evidence": [ "We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender)." ], "extractive_spans": [ "size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender)" ], "free_form_answer": "", "highlighted_evidence": [ "We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "dabb7372be31d7e3e49e882fcd8cf272748195a0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 30 Aggregate citation statistics by academic age." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Figure 30 Aggregate citation statistics by academic age." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "8e037ff9139ef089ba4848867ae7b5ae11f4d08a" ], "answer": [ { "evidence": [ "A. As of June 2019, AA had $\\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018." ], "extractive_spans": [ "44,896 articles" ], "free_form_answer": "", "highlighted_evidence": [ "As of June 2019, AA had $\\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "Which NLP area have the highest average citation for woman author?", "Which 3 NLP areas are cited the most?", "Which journal and conference are cited the most in recent years?", "Which 5 languages appear most frequently in AA paper titles?", "What aspect of NLP research is examined?", "Are the academically younger authors cited less than older?", "How many papers are used in experiment?" ], "question_id": [ "c95fd189985d996322193be71cf5be8858ac72b5", "4a61260d6edfb0f93100d92e01cf655812243724", "5c95808cd3ee9585f05ef573b0d4a52e86d04c60", "b6f5860fc4a9a763ddc5edaf6d8df0eb52125c9e", "7955dbd79ded8ef4ae9fc28b2edf516320c1cb55", "6bff681f1f6743ef7aa6c29cc00eac26fafdabc2", "205163715f345af1b5523da6f808e6dbf5f5dd47" ], "question_writer": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ], "search_query": [ "", "", "", "", "", "", "" ], "topic_background": [ "research", "research", "research", "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1 The number of AA papers published in each of the years from 1965 to 2018.", "Figure 2 The number of authors of AA papers from 1965 to 2018.", "Figure 3 Number of AA papers by type.", "Figure 4 The number of main conference papers for various venues and paper types (workshop papers, demos, etc.).", "Figure 5 Female first author (FFA) percentage over the years.", "Figure 6 FFA percentages for papers that have the word discourse in the title.", "Figure 7 FFA percentages for papers that have the word neural in the title.", "Figure 8 Lists of terms with the highest and lowest FFA percentages, respectively.", "Figure 9 Graphs showing average academic age, median academic age, and percentage of first-time publishers in AA over time.", "Figure 10 The distribution of authors in academic age bins for papers published 2011–2018.", "Figure 11 A treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green.", "Figure 12 The most common unigrams and bigrams in the titles of AA papers published 1980–2019.", "Figure 13 The most frequent unigrams and bigrams in the titles of papers published 2016 Jan to 2019 June (time of data collection).", "Figure 14 The timeline graph for parsing.", "Figure 15 The timelines for three bigrams statistical machine, neural machine, and machine translation.", "Figure 16 A timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. The bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received.", "Figure 17 The most cited papers in AA’.", "Figure 18 The most cited AA’ papers in the 2010s.", "Figure 19 Left-side graph: Total number of citations received by AA’ papers in various 5-year time spans. Right-side graph 2: Average citations per paper from various time spans.", "Figure 20 Average citations by paper type when considering papers published 1965–2016.", "Figure 21 Average citations by paper type when considering papers published 2010–2016.", "Figure 22 Median citations by paper type when considering papers published 1965–2016.", "Figure 23 Median citations by paper type when considering papers published 2010–2016.", "Figure 24 Average and median citations for long and short papers.", "Figure 25 Average citations for papers published 1965–2016 (left side) and 2010–2016 (right side), grouped by venue and paper type.", "Figure 26 The percentage of AA’ papers in various citation bins.", "Figure 27 The citation bin percentages for individual venues and paper types.", "Figure 28 The top 50 title bigrams ordered by decreasing number of total citations.", "Figure 29 The lists of top 25 bigrams ordered by average citations.", "Figure 30 Aggregate citation statistics by academic age.", "Figure 31 Average citations received by female and male first authors.", "Figure 32 Citation gap across genders for papers: (a) published in different time spans, (b) by academic age.", "Figure 33 The most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers." 
], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "6-Figure5-1.png", "7-Figure6-1.png", "7-Figure7-1.png", "7-Figure8-1.png", "8-Figure9-1.png", "9-Figure10-1.png", "10-Figure11-1.png", "12-Figure12-1.png", "13-Figure13-1.png", "13-Figure14-1.png", "13-Figure15-1.png", "15-Figure16-1.png", "16-Figure17-1.png", "17-Figure18-1.png", "18-Figure19-1.png", "18-Figure20-1.png", "18-Figure21-1.png", "19-Figure22-1.png", "19-Figure23-1.png", "19-Figure24-1.png", "20-Figure25-1.png", "21-Figure26-1.png", "22-Figure27-1.png", "23-Figure28-1.png", "24-Figure29-1.png", "26-Figure30-1.png", "26-Figure31-1.png", "28-Figure32-1.png", "29-Figure33-1.png" ] }
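A minimal sketch of the name-based gender association described in the article above (probability $\ge$ 99% thresholds against the US Social Security Administration baby-name records). The `ssa_counts` mapping and function names are hypothetical; this is not the authors' code.

```python
def gender_association(first_name, ssa_counts, threshold=0.99):
    """Associate a first name with a gender only when the SSA counts are highly skewed."""
    counts = ssa_counts.get(first_name.lower())
    if not counts:
        return "unknown"           # name not present in the SSA records
    female, male = counts["female"], counts["male"]
    total = female + male
    if total == 0:
        return "unknown"
    if female / total >= threshold:
        return "female-associated"
    if male / total >= threshold:
        return "male-associated"
    return "unknown"               # ambiguous names are left as gender-unknown

# Example usage with toy counts:
ssa_counts = {"maria": {"female": 9990, "male": 10}, "jordan": {"female": 4000, "male": 6000}}
print(gender_association("Maria", ssa_counts))   # female-associated
print(gender_association("Jordan", ssa_counts))  # unknown
```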
[ "Which 3 NLP areas are cited the most?", "Which journal and conference are cited the most in recent years?", "Which 5 languages appear most frequently in AA paper titles?" ]
[ [ "1911.03562-29-Figure33-1.png" ], [ "1911.03562-20-Figure25-1.png" ], [ "1911.03562-10-Figure11-1.png" ] ]
[ "machine translation, statistical machine, sentiment analysis", "CL Journal and EMNLP conference", "English, Chinese, French, Japanese and Arabic" ]
401
1912.10435
BERTQA -- Attention on Steroids
In this work, we extend the Bidirectional Encoder Representations from Transformers (BERT) with an emphasis on directed coattention to obtain an improved F1 performance on the SQUAD2.0 dataset. The Transformer architecture on which BERT is based places hierarchical global attention on the concatenation of the context and query. Our additions to the BERT architecture augment this attention with a more focused context to query (C2Q) and query to context (Q2C) attention via a set of modified Transformer encoder units. In addition, we explore adding convolution-based feature extraction within the coattention architecture to add localized information to self-attention. We found that coattention significantly improves the no answer F1 by 4 points in the base and 1 point in the large architecture. After adding skip connections the no answer F1 improved further without causing an additional loss in has answer F1. The addition of localized feature extraction added to attention produced an overall dev F1 of 77.03 in the base architecture. We applied our findings to the large BERT model which contains twice as many layers and further used our own augmented version of the SQUAD 2.0 dataset created by back translation, which we have named SQUAD 2.Q. Finally, we performed hyperparameter tuning and ensembled our best models for a final F1/EM of 82.317/79.442 (Attention on Steroids, PCE Test Leaderboard).
{ "paragraphs": [ [ "Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's success stems from both the challenge it presents and the recent successes in approaching human level function. As most, if not all, of the problems humans solve every day can be posed as a question, creating a deep learning based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group tested the limits of applying attention in BERT BIBREF0 to improve the network's performance on the SQUAD2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends these vectors in a global fashion. We propose BERTQA BIBREF2 which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. The SQUAD2.0 answers range in length from zero to multiple words, and this additional attention provides hierarchical information that will allow the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQUAD2.0 dataset with additional backtranslated examples. This augmented dataset will be publicly available on our github BIBREF4 on the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442 respectively. The experiments took around 300 GPU hours to train." ], [ "The SQUAD2.0 creators proposed this dataset as a means for networks to actually understand the text they were being interrogated about rather than simply being extractive parsers. Many networks stepped up to the challenge including BERT, BIDAF, and QANET. BERT is a fully feed forward network that is based on the transformer architecture BIBREF5. The base BERT model has 12 transformer encoder layers that terminate in an interchangeable final layer which can be finetuned to the specific task. We chose this network as our baseline because of its use of contextual embeddings and global attention and because of the speed advantage derived from an RNN free architecture. We derived inspiration for our modifications from the BIDAF and QANET models. BIDAF is an LSTM based network that uses character, word, and contextual embeddings which are fed through Context-to-Query (C2Q) and Query-to-Context (Q2C) layers. The final logits are derived from separate Start and End output layers, as opposed to BERT which produces these logits together. Our C2Q/Q2C addition to BERT and the Dense Layer/LSTM based separate final Start and End logit prediction layer were inspired by this paper. We also referred to the QANET model, which is likewise a fully feed forward network that emphasizes the use of convolutions to capture the local structure of text. 
Based on this paper, we created a convolutional layer within the C2Q/Q2C architecture to add localized information to BERT's global attention and the C2Q/Q2C coattention.", "In addition to referencing these papers that helped us build a successful model, we also explored many other papers which either didn't work with our transformer based model or simply didn't work in combination with our additions to BERT. The three main papers from which we tried to gain ideas are U-Net: Machine Reading Comprehension with Unanswerable Questions BIBREF6, Attention-over-Attention Neural Networks for Reading Comprehension BIBREF7, and FlowQA: Grasping Flow in History for Conversational Machine Comprehension BIBREF8. We tried implementing the multitask learning methodology presented in U-Net by passing the [CLS] token through a series of convolutional layers to create a probability of whether the question has an answer. We combined this prediction with the prediction of Start and End logits by combining the logits' crossentropy loss and the [CLS] binary crossentropy loss. Unfortunately, this additional loss seemed to be hindering the network's learning ability. We conjecture that this type of multitask learning would benefit from full training instead of the finetuning we were restricted to doing because of resources and time. We looked to Attention-over-Attention as a source of additional ways of injecting attention into our network. Attention-over-Attention has a dot-product based attention mechanism that attends to attention vectors instead of embedding vectors. We believe this method did not help in our case because BERT works with the Context and Query as part of the same vector while the Attention-over-Attention model requires completely uncoupled Context and Query vectors. As a side note, we do separate the Context and Query vector derived from BERT before the coattention layers of our model, but these layers are not negatively affected by the fact that these separated vectors contain 'mixed' information between the Context and Query. Finally, we explored the FlowQA paper which proposed combining embeddings from multiple layers as an input to the final prediction layer. We implemented this idea by combining embeddings from multiple BERT layers as an input to our final prediction layer. This final layer was simply an additional transformer encoder and we think that the encoder does not have the LSTM's ability of being able to aggregate information from multiple sources." ], [ "We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks. Each part of the project is discussed further in the subsections below." ], [ "The base BERT network, the baseline for this project, is built with 12 Transformer encoder blocks. These encoder blocks contain multi-head attention and a feed forward network. 
Each head of the multi-head attention attends to the concatenation of the context and query input and thus forms a global attention output. The output of each Transformer encoder is fed into the next layer, creating an attention hierarchy. The benefit of this construction is that the model has access to the entire query and context at each level, allowing both embeddings to learn from each other and removing the long term memory bottleneck faced by RNN based models. BERTQA uses directed coattention between the query and context, as opposed to attending to their concatenation (Figure FIGREF2). Our architecture consists of a set of 7 directed coattention blocks that are inserted between the BERT embeddings and the final linear layer before loss calculation.", "The BERT embeddings are masked to produce separate query and context embedding vectors (Equations DISPLAY_FORM3 , DISPLAY_FORM4).", "Where E is the contextualized embeddings derived from BERT, m is the mask, and c and q are the context and query respectively.", "$E_q$ and $E_c$ are then projected through linear layers to obtain key, value, and query vectors (Equation DISPLAY_FORM5).", "Where Q, K, and V are the query, key and value vectors.", "The Q, K, and V vectors are used in scaled dot-product attention (Equation DISPLAY_FORM6) to create the separate Context-to-Query (C2Q) and Query-to-Context (Q2C) attention vectors (see the illustrative sketch after this paper's text).", "Where y is q and z is c for Q2C and y is c and z is q for C2Q.", "The C2Q attention vector is summed with the query input and the Q2C attention vector is summed with the context input via a skip connection. Each sum vector is then pushed through a fully connected block and then is added back to the output of the fully connected block via another skip connection. Each sum is followed by a layer-wise normalization. The two resulting 3D C2Q and Q2C vectors are concatenated along the third (embedding) dimension and are then combined by two 1D convolutions to create the final 3D vector representing the combination of the C2Q and Q2C attention. We use two convolution layers here so that the concatenated dimension is reduced more gradually and too much information is not lost. This vector then goes into a final attention head to perform separate self attention pre-processing for the Start logit and End logit prediction layers. The Start logit is generated by a linear layer and the End logit is generated by the output of an LSTM which takes the concatenation of the start span and end span embeddings as an input. We used the BERT architecture code written in Pytorch from the HuggingFace github BIBREF3. We wrote our own code for all of the subsequent architecture." ], [ "To refine the focus of the attention further, we experimented with convolutional feature extraction to add localized information to the coattention output. We added four convolutional layers within the coattention architecture (Figure FIGREF8). The input to these layers was the BERT embeddings, and the outputs were added to the outputs of the multi-head attention layers in the coattention architecture and then layer-wise normalized. This combination of coattention and local information provides a hierarchical understanding of the question and context. By itself, BERT provides information about the question and context as a unit, while the coattention extracts information from both the question and context relative to each other. 
The convolutions extract local features within the question and context to add localized information to the attention and embedding meanings. After adding the separate start and end logic, we found via an ablation study, in which we ran the network without the convolutional layers, that the localized feature extraction did not improve the network's learning. We speculate that the convolutions prevented improvement beyond a certain F1 score because they are lossy compressors and the information lost by the convolutions might be essential to downstream learning." ], [ "As shown in Figure FIGREF2, we have a skip connection from the BERT embedding layer combined with the convolved directed co-attention output (C2Q and Q2C). We experimented with 3 skip connection configurations: Simple ResNet inspired Skip, Self-Attention Transformer Skip, and a Highway Network. Of these, the Self-Attention Transformer based skip worked best initially. However, when we combined this skip connection with our logit prediction logic, the network was no longer able to learn as well. The Simple ResNet inspired skip BIBREF11 connection solved this issue. It seems that the transformer skip connection followed by the additional transformer encoder blocks that form the beginning of the logit prediction logic processed the BERT embeddings too much and thus lost the benefit of the skip connection. Therefore, we decided to use a Simple ResNet inspired skip alongside the self attention heads for logit prediction. This allows the directed co-attention layers to learn distinct information coming from BERT embeddings via the skip and allows for efficient backpropagation to the BERT layers." ], [ "Inspired by the work presented in BIBREF12, where the authors present a way of generating new questions out of context, and after observing the patterns in SQuAD 2.0, we realized there is a lot of syntactic and grammatical variance in the questions written by crowd workers. To help our network generalize better to these variations, we decided to augment the dataset by paraphrasing the questions in the SQuAD training set. We applied backtranslation using the Google Cloud Translation (NMT) API BIBREF13 to translate each question from English to French and then back to English, essentially 2 translations per question (Figure FIGREF11; an illustrative sketch of this loop is included further below).", "We call our augmented dataset SQUAD 2.Q and make 3 different versions (35%, 50%, and 100% augmentation), alongside code to generate them, publicly available on our github BIBREF4." ], [ "Hyperparameter tuning has been an ongoing process for our experiments. The following are the hyperparameters we tweaked and tuned on BERT Base:", "Number of Directed co-Attention layers - We tried various numbers of layers and we found that N=7 for the co-attention layers gave us optimal performance while being able to fit the model on 2 GPUs (3 F1 score improvement by itself).", "Max Sequence length - After initial experiments with the default sequence length (context + query tokens) of 384, we switched to a sequence length of 512. 
This gave us a 0.6 F1 improvement on our model.", "Batch Size - Default: 12, We had to use a batch size of 6 for all our experiments due to resource constraints and out of memory issues on the GPU for any larger batch size.", "Number of epochs - Default: 2 On increasing the number of epochs we saw a significant degradation in performance (-3 F1 score), we attribute this to the fact that the model starts to overfit to the training data with high variance and since the batch size is smaller the gradient updates could be noisy not allowing it to optimally converge.", "Learning Rate - Default: 3e-5 We wrote a script to help us find the optimal learning rate using grid search and found the optimal learning rates for SQuAD 2.0 and SQuAD 2.Q respectively for batch size of 6." ], [ "We applied what we learned from the previous five subsections to the large BERT model, which has twice as many layers as the base model. In order to fit this model on our GPU and still use 7 of our coattention layers, we were limited to two examples on the GPU at a time. However, we also found that BERT large requires a larger batch size to get a good performance. As such, we left the batch size 6 as with the base model and used a gradient accumulation of 3 so that only two examples were on the GPU at a time. Additionally, the large model is very sensitive to the learning rate, and the rate of 3e-5 which we used with the smaller model no longer worked. We ran the model on a subset of the data with various learning rates and found that 1.1e-5 to 1.5e-5 works the best for the large model depending on the dataset used (SQuAD 2.0 or SQUAD 2.Q).", "After experimenting with multiple combinations of the ideas we described above, we ensembled our three best networks to create our final predictions. The configurations of our three best networks are described in Table TABREF19.", "We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer." ], [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8).", "The results presented above verify our hypothesis that adding layers of directed attention to BERT improves its performance. The C2Q/Q2C network produced a significant improvement in the No Answer F1 score while causing a symmetric drop in the Has Answer F1 score. The C2Q/Q2C network attends the context relative to the query and vice versa instead of as a concatenated whole. This method of attention provides more information regarding whether there is an answer to the question in the context than the original BERT attention. The skip connections improved the scores further by adding the BERT embeddings back in to the coattention vectors and providing information that may have been lost by the C2Q/Q2C network in addition to providing a convenient path for backpropagation to the BERT embedding layers. 
The skip connection containing the transformer provides minimal gains while adding a significant overhead to runtime. Therefore, we built the final convolutional experiments on the Simple Skip architecture. The localized feature extraction within the coattention network produced the best results in the base model, but prevented an improvement in our modified BERT large model.", "Table TABREF21 shows the F1 and EM scores obtained for the experiments on the large model. The models labeled 1, 2, and 3 are described in detail in Section 3.6.", "Each of the models built on BERT large used our augmented dataset in addition to the coattention architecture, simple skip connection, and separate start and end logit logic. The Model 1 results show that a moderately augmented (35%) data set helps the training, since both the unaugmented and highly augmented (50%) models did not perform as well. It seems that adding too much augmented data reduces the F1 because the augmented data is noisy relative to the original data. The performance difference between Models 1 and 2 supports the use of the LSTM in creating the End logit predictions. The LSTM successfully combines the information from the Start logit and the End embeddings to provide a good input to the End logit linear layer. The ensemble model performed the best by far due to a significant increase in the no answer F1, which can be attributed to the ensembling method being biased towards models that predict no answer.", "We investigated the attention distributions produced by our proposed model by modifying the open source code from BertViz BIBREF14 . For the case where the question has an answer in the context (Figure FIGREF22), the attention heads produce activation around the answer phrase \"in the 10th and 11th centuries\". In the case where there is no answer in the context, the attention heads produce considerable activation on the [SEP] word-piece which is outside the context span.", "As seen in Figure FIGREF25, we conducted an error analysis over different question types. Note that questions that did not fit into the 7 bins were classified as \"Other\". An example of a question in the \"Other\" category would be an \"Is it?\" question, which is a minority set in SQUAD 2.0. Over the baseline, our model presents an overall improvement across the board for the different types of questions in the SQuAD 2.0 dev set. In the case of \"Which\" questions, our model goes wrong 69 times whereas the baseline model goes wrong 64 times, a very small numeric difference. However, for the \"What\" questions the baseline model produces incorrect outputs for 776 examples while our model produces 30 fewer incorrect outputs. The reason for this lapse appears to be related to data augmentation, where we observed that many times \"Which\" was backtranslated as \"What\" and vice versa. Thus, the questions in these two classes are mixed and a completely accurate analysis of improvements in these classes is not possible.", "Figure FIGREF26 shows an example cropped context and question that our ensemble model answers correctly while the BERT large model answers incorrectly. It seems that the BERT large model combined the words spirit and Christian to answer this question even though the word spirit belongs to martial and the word Christian belongs to piety. Our model was able to keep the paired words together and realize that the question has no answer. 
We believe that our model was able to get the correct answer because of the coattention which is able to keep the words paired together correctly.", "Overall, our model has shown marked qualitative and quantitative improvement over the base and large BERT models. Our SQUAD 2.Q dataset helps improve performance by mimicking the natural variance in questions present in the SQUAD 2.0 dataset. BertQA produces a significant improvement in the No Answer F1 by being able to maintain associations between words via coattention, as seen in Figure FIGREF26, and by ensembling our three best models." ], [ "We present a novel architectural scheme to use transformers to help the network learn directed co-attention which has improved performance over BERT baseline. We experimented with several architectural modifications and presented an ablation study. We present SQuAD 2.Q, an augmented dataset, developed using NMT backtranslation which helps our model generalize better over syntatic and grammatical variance of human writing. Our ensemble model gives a 3.5 point improvement over the Bert Large dev F1. We learned a lot about neural architectural techniques through experimenting with various model configurations. We also learned about how different model components do or don't work together and that some architectural choices like convolutional layers that work so well in computer vision do not necessarily work as well in NLP.", "We would like to improve on the quality of data augmentation to limit noise in the dataset and further extend this work to context augmentation as well. Apart from that, we would also like to try recent architectures like Transformer-XL BIBREF15 which has potential to offer additional improvement on HasAns F1 by remembering long term dependencies and evaluate how it scales with our model as a next step. Given sufficient compute resources we would also like to pre-train our C2Q and Q2C layers similar to BERT pre-training to learn deeper language semantics and then fine-tune it on the SQuAD dataset for the task of Question Answering.", "We would like to thank the CS224n Team for all the support throughout the course and also thank the folks at Azure for providing us with Cloud credits." ] ], "section_name": [ "Introduction", "Related Work", "Methods", "Methods ::: BERTQA - Directed Coattention", "Methods ::: Localized Feature Extraction", "Methods ::: Skip Connections", "Methods ::: Data Augmentation - SQuAD 2.Q", "Methods ::: Hyperparameter Tuning", "Methods ::: BERT Large and Ensembling", "Results and Analysis", "Conclusion" ] }
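A rough, single-head illustration of the directed C2Q/Q2C attention described in the methods above. It assumes the BERT output E and a binary mask that is 1 on query tokens; it omits the multiple heads, feed-forward blocks, layer normalization, and the 1D convolutions that merge the two streams. This is a sketch under those assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class DirectedCoattention(nn.Module):
    """Single-head sketch of the C2Q/Q2C attention described above (not the authors' code)."""

    def __init__(self, hidden):
        super().__init__()
        self.q_proj = nn.Linear(hidden, hidden)
        self.k_proj = nn.Linear(hidden, hidden)
        self.v_proj = nn.Linear(hidden, hidden)

    def attend(self, y, z):
        # Scaled dot-product attention where y provides the query and z the key/value.
        q, k, v = self.q_proj(y), self.k_proj(z), self.v_proj(z)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        return torch.softmax(scores, dim=-1) @ v

    def forward(self, E, query_mask):
        # Mask the BERT embeddings into separate query and context views.
        E_q = E * query_mask.unsqueeze(-1)          # query tokens kept
        E_c = E * (1 - query_mask).unsqueeze(-1)    # context tokens kept
        c2q = self.attend(E_c, E_q)  # C2Q: y is the context, z is the query
        q2c = self.attend(E_q, E_c)  # Q2C: y is the query, z is the context
        # As described above, C2Q is summed with the query input and Q2C with the
        # context input; the paper then applies fully connected blocks, layer
        # normalization, and two 1D convolutions to combine the two streams.
        return c2q + E_q, q2c + E_c

# Example usage: batch of 2 sequences, length 8, hidden size 16.
E = torch.randn(2, 8, 16)
query_mask = torch.zeros(2, 8)
query_mask[:, :3] = 1  # pretend the first 3 tokens are the query
coatt = DirectedCoattention(16)
c2q_out, q2c_out = coatt(E, query_mask)
print(c2q_out.shape, q2c_out.shape)  # torch.Size([2, 8, 16]) twice
```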
{ "answers": [ { "annotation_id": [ "2b838f331b408f376e6f0bf242ec8cc7c8841852" ], "answer": [ { "evidence": [ "We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer." ], "extractive_spans": [ "choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer" ], "free_form_answer": "", "highlighted_evidence": [ "We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "7d4e310ebb2beb7d8b2f7fb8b4f940a8551e30fd" ], "answer": [ { "evidence": [ "We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks. Each part of the project is discussed further in the subsections below." ], "extractive_spans": [ "number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "8e1b248f7f8a7cf3e0311bb07dad853676ef5725" ], "answer": [ { "evidence": [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8).", "FLOAT SELECTED: Table 2: Performance results for experiments relative to BERT base" ], "extractive_spans": [], "free_form_answer": "Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improes F1 from 74.34 to 74.95 ", "highlighted_evidence": [ "Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. 
The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip).", "FLOAT SELECTED: Table 2: Performance results for experiments relative to BERT base" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no" ], "question": [ "What ensemble methods are used for best model?", "What hyperparameters have been tuned?", "How much F1 was improved after adding skip connections?" ], "question_id": [ "8d989490c5392492ad66e6a5047b7d74cc719f30", "a7829abed2186f757a59d3da44893c0172c7012b", "707db46938d16647bf4b6407b2da84b5c7ab4a81" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Proposed C2Q and Q2C directed coattention architecture", "Figure 2: Convolutional Layers for Local Attention (in channels, out channels, kernel size)", "Figure 3: Back Translation to augment the SQuAD dataset", "Table 1: Model Configurations; BS = Batch Size, GA = Gradient Accum., LR = Learning Rate", "Table 2: Performance results for experiments relative to BERT base", "Table 3: Performance results for experiments relative to BERT large", "Figure 4: Visualization of attention produced by our model", "Figure 5: Percent error for different question types", "Figure 6: Comparison of BERT large and Ensemble performance on an example" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "6-Table1-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure4-1.png", "7-Figure5-1.png", "8-Figure6-1.png" ] }
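A sketch of the SQuAD 2.Q backtranslation augmentation described in the paper above. The `translate` function is an identity stub standing in for a machine translation call (the authors used the Google Cloud Translation API, whose client code is not reproduced here), and appending the paraphrased copies alongside the originals is an assumption about how the augmented set is assembled.

```python
import random

def translate(text, source_lang, target_lang):
    # Identity stub for illustration only; in practice this would call an MT service.
    return text

def backtranslate_question(question, pivot_lang="fr"):
    """English -> pivot language -> English: two translations per question."""
    pivot = translate(question, source_lang="en", target_lang=pivot_lang)
    return translate(pivot, source_lang=pivot_lang, target_lang="en")

def augment_squad(examples, fraction=0.35, seed=0):
    """Paraphrase the questions of a random subset of training examples and append them."""
    rng = random.Random(seed)
    augmented = []
    for ex in examples:
        if rng.random() < fraction:
            new_ex = dict(ex)
            new_ex["question"] = backtranslate_question(ex["question"])
            augmented.append(new_ex)
    return examples + augmented

# Example usage with a toy example list:
examples = [{"question": "Where was Marie Curie born?", "context": "...", "answers": []}]
print(len(augment_squad(examples, fraction=1.0)))  # 2: the original plus a paraphrased copy
```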
[ "How much F1 was improved after adding skip connections?" ]
[ [ "1912.10435-6-Table2-1.png", "1912.10435-Results and Analysis-0" ] ]
[ "Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improes F1 from 74.34 to 74.95 " ]
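The ensembling rule reported above (pick the answer from the network with the highest probability, but output no answer if any network predicts no answer) can be sketched as follows. The (answer, probability) representation, with an empty string standing for "no answer", is an assumption made for illustration.

```python
def ensemble_prediction(predictions):
    """Combine per-model (answer_text, probability) pairs as described in the paper."""
    if any(answer == "" for answer, _ in predictions):
        return ""  # any no-answer vote wins, biasing the ensemble towards no-answer
    return max(predictions, key=lambda p: p[1])[0]

# Example usage:
print(ensemble_prediction([("in 1905", 0.8), ("1905", 0.6), ("in the year 1905", 0.7)]))  # in 1905
print(ensemble_prediction([("in 1905", 0.9), ("", 0.5), ("1905", 0.7)]))                  # "" (no answer)
```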
402
1603.04513
Multichannel Variable-Size Convolution for Sentence Classification
We propose MVCNN, a convolution neural network (CNN) architecture for sentence classification. It (i) combines diverse versions of pretrained word embeddings and (ii) extracts features of multigranular phrases with variable-size convolution filters. We also show that pretraining MVCNN is critical for good performance. MVCNN achieves state-of-the-art performance on four tasks: on small-scale binary, small-scale multi-class and largescale Twitter sentiment prediction and on subjectivity classification.
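A simplified sketch of the two ideas named in the abstract: a multichannel input built from several embedding versions, and parallel convolution filters of different widths (wide convolution over word positions, spanning the full embedding dimension). The class and parameter names are assumptions for illustration, not the authors' implementation, which additionally uses dynamic k-max pooling and stacked convolution layers.

```python
import torch
import torch.nn as nn

class MultichannelVariableSizeConv(nn.Module):
    """Sketch of multichannel, variable-size convolution (not the authors' code)."""

    def __init__(self, channels, emb_dim, n_kernels, filter_sizes=(3, 5)):
        super().__init__()
        # One 2D convolution per filter width; each filter spans the full embedding
        # dimension and all embedding channels, and slides over word positions.
        # Padding of (size - 1) gives the wide convolution described in the paper.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, n_kernels, kernel_size=(size, emb_dim), padding=(size - 1, 0))
            for size in filter_sizes
        )

    def forward(self, x):
        # x: (batch, channels, sentence_length, emb_dim), one channel per embedding version.
        feature_maps = [torch.tanh(conv(x)).squeeze(-1) for conv in self.convs]
        # Each element: (batch, n_kernels, sentence_length + size - 1) for one filter width.
        return feature_maps

# Example: 5 embedding versions, 12-word sentences, 50-dimensional embeddings, 3 kernels each.
x = torch.randn(2, 5, 12, 50)
model = MultichannelVariableSizeConv(channels=5, emb_dim=50, n_kernels=3)
for fm in model(x):
    print(fm.shape)  # torch.Size([2, 3, 14]) and torch.Size([2, 3, 16])
```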
{ "paragraphs": [ [ "Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.", "In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.", "Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.", "Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. 
We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.", "The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.", "For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so that overfitting is a danger when they are trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.", "In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.", "In remaining parts, Section \"Related Work\" presents related work. Section \"Model Description\" gives details of our classification model. Section \"Model Enhancements\" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section \"Experiments\" reports experimental results. Section \"Conclusion\" concludes this work." ], [ "Much prior work has exploited deep neural networks to model sentences.", "blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.", "collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.", "blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.", "kimEMNLP2014 also studied multichannel representation and variable-size filters. Differently, their multichannel relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other one is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, their idea of variable-size filters is further developed.", "le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.", "Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. 
They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms." ], [ "We now describe the architecture of our model MVCNN, illustrated in Figure 1 .", "Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each is a matrix initialized by a different embedding version. Let $s$ be sentence length, $d$ dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\\times d\\times s$ . Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In implementation, sentences in a mini-batch will be padded to the same length, and unknown words for corresponding channel are randomly initialized or can acquire good initialization from the mutual-learning phase described in next section.", "Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words find no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.", "Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$ : $w_1, w_2, \\ldots , w_s$ ; $\\mathbf {w}_i\\in \\mathbb {R}^{d}$ denotes the embedding of word $w_i$ ; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. 
For a filter of width $l$, we first concatenate the initialized embeddings of $l$ consecutive words ( $\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$ ) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$, then generate the feature value of this phrase as $\mathbf {p}_i$ (the whole vector $\mathbf {p}\in \mathbb {R}^{s+l-1}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}\in \mathbb {R}^{ld}$ as: ", "$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2) ", "More generally, the convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps of increasing layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4. We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional way cannot detect the features of regions of different granularity. Hence, we keep the property of multiple kernels while extending them to variable sizes within the same layer.", "As in CNNs for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$, and assume that a total of $n$ feature maps exist in layer $i-1$ : $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$. Considering a specific filter size $l$ in layer $i$, each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$, arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$, with each feature map $\mathbf {F}_{i-1}^k$ and summing the results: ", "$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3) ", "where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$. The weights in $\mathbf {V}$ form a rank 4 tensor.", "Note that we use wide convolution in this work: word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are zero embeddings. Wide convolution ensures that each word can be detected by all filter weights in $\mathbf {V}$.", "In Figure 1, the first convolution layer deals with an input of $n=5$ feature maps. Its filters have sizes 3 and 5, respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases of length 3 and of length 5, respectively.", "DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed a folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. In contrast, the convolution in the present work is able to model dependencies across dimensions as well as across adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN, since the output of each convolution layer is smaller by a factor of $d$.",
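The following NumPy sketch spells out Eq. 2 and Eq. 3 for one wide convolution layer with variable-size filters. It is a simplified illustration rather than the authors' implementation: the bias term of Eq. 2 is omitted, and the kernel weights are stored as one $(d, l\,d)$ matrix per (kernel, input map) pair.

```python
import numpy as np

def wide_conv_variable_filters(F_prev, kernels):
    """One convolution layer with variable-size, wide filters (cf. Eq. 2 and 3).

    F_prev  : list of n input feature maps, each of shape (d, s).
    kernels : dict {filter width l: array of shape (m, n, d, l*d)}, i.e. m kernels
              per width, one (d, l*d) weight matrix per (kernel, input map) pair.
    Returns : dict {l: list of m output maps, each of shape (d, s + l - 1)}.
    """
    n = len(F_prev)
    d, s = F_prev[0].shape
    outputs = {}
    for l, V in kernels.items():
        # Wide convolution: pad each side with l-1 zero columns so every word is
        # seen by all filter positions.
        padded = [np.pad(f, ((0, 0), (l - 1, l - 1))) for f in F_prev]
        maps = []
        for j in range(V.shape[0]):                       # each kernel of width l
            out = np.zeros((d, s + l - 1))
            for i in range(s + l - 1):                    # each phrase position
                col = np.zeros(d)
                for k in range(n):                        # sum over input maps (Eq. 3)
                    c_i = padded[k][:, i:i + l].T.reshape(l * d)  # concat l word vectors
                    col += V[j, k] @ c_i                  # linear projection (Eq. 2, bias omitted)
                out[:, i] = np.tanh(col)                  # tanh nonlinearity
            maps.append(out)
        outputs[l] = maps
    return outputs

# Toy usage: 5 input channels, d=50, sentence of 12 words, filter widths 3 and 5, 3 kernels each.
rng = np.random.default_rng(0)
F0 = [rng.normal(size=(50, 12)) for _ in range(5)]
K = {l: rng.normal(scale=0.01, size=(3, 5, 50, l * 50)) for l in (3, 5)}
out = wide_conv_variable_filters(F0, K)
print(out[3][0].shape, out[5][0].shape)                   # (50, 14) (50, 16)
```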
"Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features, compared with simple max (1-max) pooling BIBREF2. This property enables the model to connect multiple convolution layers to form a deep architecture that extracts high-level abstract features. In this work, we directly use it to extract features for variable-size feature maps. For a given feature map in layer $i$, dynamic k-max pooling extracts the top $k_{i}$ values from each dimension, and the top $k_{top}$ values in the top layer. We set ", "$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5) ", "where $i\in \lbrace 1,2,\ldots \, L\rbrace $ is the index of the convolution layer, counted from bottom to top in Figure 1; $L$ is the total number of convolution layers; and $k_{top}$ is a constant determined empirically; we set it to 4, as in BIBREF4.", "As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one resulting from filter size 3 and one from filter size 5. The values in the two feature maps are for phrases of different granularity. The motivation for this convolution layer is that a feature reflected by a short phrase may not be trustworthy while the longer phrase containing it is, or, conversely, the long phrase may have no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers can therefore trade off the features of different granularity.", "Hidden Layer. On top of the final k-max pooling, we stack a fully connected layer to learn a sentence representation of a given dimension (e.g., $d$ ).", "Logistic Regression Layer. Finally, the sentence representation is forwarded into a logistic regression layer for classification.", "In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets the insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4, MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible in extracting features of variable-range phrases, but also able to model dependencies among all dimensions of the representations. MVCNN extends the network in BIBREF5 by a hierarchical convolution architecture and by further exploration of multichannel and variable-size feature detectors." ], [ "This part introduces two training tricks that enhance the performance of MVCNN in practice.", "Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in one embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack descriptions from the other versions.", "To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset a more comprehensive initialization (as opposed to just being randomly initialized).",
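A rough sketch of this projection-learning regime is given below; the formal definition follows in the next paragraphs. The helper names (`learn_projection`, `fill_unknown`) are illustrative assumptions, and the closed-form least-squares solve used here is only a stand-in for whatever optimizer minimizes the squared projection error in the actual system.

```python
import numpy as np

def learn_projection(space_i, space_j):
    """Learn a d x d matrix M mapping version-i vectors to version-j vectors,
    trained on the overlapping vocabulary of the two embedding versions
    (least-squares fit of M @ w_i toward w_j)."""
    shared = sorted(set(space_i) & set(space_j))
    Wi = np.stack([space_i[w] for w in shared])            # (|V_ij|, d)
    Wj = np.stack([space_j[w] for w in shared])            # (|V_ij|, d)
    M, *_ = np.linalg.lstsq(Wi, Wj, rcond=None)            # minimizes squared error
    return M.T                                             # so that M @ w_i approximates w_j

def fill_unknown(target, sources, projections):
    """Give every word missing from `target` the element-wise average of its
    projected representations from the source versions that do know it."""
    filled = dict(target)
    for word in set().union(*sources) - set(target):
        votes = [projections[k] @ src[word] for k, src in enumerate(sources) if word in src]
        if votes:
            filled[word] = np.mean(votes, axis=0)
        # words known in no version are left to random initialization / pretraining
    return filled

# Toy usage with two 3-dimensional "embedding versions".
A = {"cat": np.array([1., 0., 0.]), "dog": np.array([0., 1., 0.])}
B = {"cat": np.array([2., 0., 0.]), "dog": np.array([0., 2., 0.]), "mat": np.array([0., 0., 2.])}
M_BA = learn_projection(B, A)                              # project B-space into A-space
A_filled = fill_unknown(A, [B], {0: M_BA})
print(sorted(A_filled))                                    # ['cat', 'dog', 'mat']
```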
"Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$. Our goal is to learn embeddings for the words in $V_i^-$ using knowledge from the other $c-1$ embedding versions.", "We use the overlapping vocabulary between $V_i$ and $V_j$, denoted as $V_{ij}$, as the training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows: ", "$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6) ", "where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$, $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$, and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$. The squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We use $\hat{\mathbf {w}}_j=f_{ij}(\mathbf {w}_i)$ as a shorthand for Eq. 6. In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the corresponding vocabulary intersection $V_{ij}$. Let $w$ be a word that is unknown in $V_i$ but known in $V_1, V_2, \ldots , V_k$ (the vocabularies of $k$ other versions). To compute an embedding for $w$ in $V_i$, we first project its known representations into $V_i$, obtaining $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ from the spaces $V_1, V_2, \ldots , V_k$, and then take the element-wise average of $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ as the representation of $w$ in $V_i$. Our hypothesis is that the representation of $w$ learned this way is close to the representation that would have been learned had $w$ been a known word in $V_i$.", "As discussed in Section \"Model Description\", we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method for learning representations of unknown words, and that it increases system performance if the learned representations are used for initialization.", "Pretraining. Sentence classification systems are usually implemented as supervised training regimes, where the training loss is computed between the true label distribution and the predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.", "Figure 1 shows our pretraining setup. The “sentence representation” (the output of the “Fully connected” hidden layer) is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).", "Given the sentence representation $\mathbf {s}\in \mathbb {R}^d$ and the initialized representations of $2t$ context words ( $t$ left words and $t$ right words): $\mathbf {w}_{i-t}$, $\ldots $, $\mathbf {w}_{i-1}$, $\mathbf {w}_{i+1}$, $\ldots $, $\mathbf {w}_{i+t}$, each in $\mathbb {R}^d$, we average the total of $2t+1$ vectors element-wise, depicted as the “Average” operation in Figure 1. Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18. For each true example, 10 noise words are sampled.",
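For concreteness, the averaging and word-prediction step of this pretraining objective might look like the following sketch. The NCE term is shown in a simplified binary-logistic form without the noise-distribution correction, and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def predict_middle_rep(sentence_rep, context_vecs):
    """Element-wise average of the sentence representation and the 2t context
    word vectors; the result is treated as a prediction of the middle word."""
    return np.mean(np.vstack([sentence_rep] + context_vecs), axis=0)

def nce_loss(pred, true_vec, noise_vecs):
    """Simplified NCE-style objective: score the true middle word against
    sampled noise words with a logistic loss (10 noise words in the paper)."""
    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(logistic(pred @ true_vec) + 1e-12)
    for nv in noise_vecs:
        loss += -np.log(1.0 - logistic(pred @ nv) + 1e-12)
    return loss

# Toy usage: d=50, t=3 context words on each side, 10 noise samples.
rng = np.random.default_rng(0)
s_rep = rng.normal(size=50)
context = [rng.normal(size=50) for _ in range(6)]
pred = predict_middle_rep(s_rep, context)                  # average of 2t+1 vectors
print(nce_loss(pred, rng.normal(size=50), [rng.normal(size=50) for _ in range(10)]))
```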
"Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only the fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.", "The rationale for this pretraining is similar to that of an autoencoder: for an object composed of smaller-granularity elements, the representations of the whole object and of its components can inform each other. The CNN architecture learns sentence features layer by layer, and those features are then justified by all constituent words.", "During pretraining, all the model parameters, including the multichannel input, the convolution parameters and the fully connected layer, are updated until they are mature enough to extract the sentence features. Subsequently, the same sets of parameters are fine-tuned for the supervised classification tasks.", "In sum, this pretraining is designed to produce good initial values for both the model parameters and the word embeddings. It is especially helpful for pretraining the embeddings of unknown words." ], [ "We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments." ], [ "In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize the cross-entropy of the predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in the fully connected layers. A dropout operation BIBREF19 is put before the logistic regression layer. The network is trained by back-propagation in mini-batches, and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20 .", "In all data sets, the initial learning rate is 0.01, the dropout probability is 0.8, the $L_2$ weight is $5\cdot 10^{-3}$, and the batch size is 50. In each convolution layer, the filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size)." ], [ "Standard Sentiment Treebank BIBREF21 . This small-scale dataset includes two tasks of predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances, as in BIBREF23 , BIBREF4 .", "Sentiment140 BIBREF24 . This is a large-scale sentiment classification dataset of tweets, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. 
The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.", "Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.", "In this work, we use five embedding versions, as shown in Table 1 , to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. It has no 50-dimensional embeddings available online. We use released code to train skip-gram on English Gigaword Corpus BIBREF26 with setup: window size 5, negative sampling, sampling rate $10^{-3}$ , threads 12. It is worth emphasizing that above embeddings sets are derived on different corpora with different algorithms. This is the very property that we want to make use of to promote the system performance.", "Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).", "About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.", "Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.", "The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. 
All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34, etc.", "The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked in italics). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size contributes. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.", "From the “layers” block, we can see that our system performs best with two layers of convolution in the Standard Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution in Sentiment140 (row 32). This is probably due to Sentiment140 being a much larger dataset; in such a case deeper neural networks are beneficial.", "The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that this is because pretraining can influence more words, and all learned word embeddings are tuned on the dataset after pretraining.", "The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare different embedding versions fairly in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decisions on which embeddings to use for their own tasks." ], [ "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization (diverse versions of pretrained word embeddings are used) and variable-size filters (features of multigranular phrases are extracted with variable-size convolution filters). We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks." ], [ "As pointed out by the reviewers, the success of the multichannel approach is likely due to a combination of several quite different effects.", "First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in their sensitivity to word order (e.g., SENNA: yes, word2vec: no), in their objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).", "Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.", "Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. 
Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.", "We plan to pursue these questions in future work." ], [ "Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335)." ] ], "section_name": [ "Introduction", "Related Work", "Model Description", "Model Enhancements", "Experiments", "Hyperparameters and Training", "Datasets and Experimental Setup", "Conclusion", "Future Work", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "5388e251489e27615d6b54e9b7771fff278d4b37" ], "answer": [ { "evidence": [ "Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems." ], "extractive_spans": [ "on the unlabeled data of each task" ], "free_form_answer": "", "highlighted_evidence": [ "In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "35ba7b3ebf2ad9740af5ecf03e8046916f5ceba1" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "extractive_spans": [], "free_form_answer": "0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). 
NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "9b253a1f26aaf983aca556df025083a4a2fa4ab9" ] }, { "annotation_id": [ "41cb9091fb9c2ab8c2ca98fde8471e4ae37a2197" ], "answer": [ { "evidence": [ "The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "extractive_spans": [ "The system benefits from filters of each size.", "features of multigranular phrases are extracted with variable-size convolution filters." 
], "free_form_answer": "", "highlighted_evidence": [ "The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).", "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. ", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "2caaebbf87e40141b351d156c182b4319afb46bf" ], "answer": [ { "evidence": [ "In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare fairly different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decision on which embeddings to use for their own tasks.", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). 
SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "extractive_spans": [ "each embedding version is crucial for good performance" ], "free_form_answer": "", "highlighted_evidence": [ "In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. ", "FLOAT SELECTED: Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "db18105d7454dd2a5b40396063670c89678f917a" ], "answer": [ { "evidence": [ "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks." 
], "extractive_spans": [ "MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. " ], "free_form_answer": "", "highlighted_evidence": [ "This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "Where is MVCNN pertained?", "How much gain does the model achieve with pretraining MVCNN?", "What are the effects of extracting features of multigranular phrases?", "What are the effects of diverse versions of pertained word embeddings? ", "How is MVCNN compared to CNN?" ], "question_id": [ "67a28fe78f07c1383176b89e78630ee191cf15db", "d8de12f5eff64d0e9c9e88f6ebdabc4cdf042c22", "9cba2ee1f8e1560e48b3099d0d8cf6c854ddea2e", "7975c3e1f61344e3da3b38bb12e1ac6dcb153a18", "eddb18109495976123e10f9c6946a256a55074bd" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "research", "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: MVCNN: supervised classification and pretraining.", "Table 1: Description of five versions of word embedding.", "Table 2: Statistics of five embedding versions for four tasks. The first block with five rows provides the number of unknown words of each task when using corresponding version to initialize. Voc size: vocabulary size. Full hit: embedding in all 5 versions. Partial hit: embedding in 1–4 versions, No hit: not present in any of the 5 versions.", "Table 3: Test set results of our CNN model against other methods. RAE: Recursive Autoencoders with pretrained word embeddings from Wikipedia (Socher et al., 2011b). MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013). DCNN, MAX-TDNN, NBOW: Dynamic Convolution Neural Network with k-max pooling, Time-Delay Neural Networks with Max-pooling (Collobert and Weston, 2008), Neural Bag-of-Words Models (Kalchbrenner et al., 2014). Paragraph-Vec: Logistic regression on top of paragraph vectors (Le and Mikolov, 2014). SVM, BINB, MAXENT: Support Vector Machines, Naive Bayes with unigram features and bigram features, Maximum Entropy (Go et al., 2009). NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). CNN-rand/static/multichannel/nonstatic: CNN with word embeddings randomly initialized / initialized by pretrained vectors and kept static during training / initialized with two copies (each is a “channel”) of pretrained embeddings / initialized with pretrained embeddings while fine-tuned during training (Kim, 2014). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Minus sign “-” in MVCNN (-Huang) etc. means “Huang” is not used. “versions / filters / tricks / layers” denote the MVCNN variants with different setups: discard certain embedding version / discard certain filter size / discard mutual-learning or pretraining / different numbers of convolution layer." ], "file": [ "3-Figure1-1.png", "7-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png" ] }
[ "How much gain does the model achieve with pretraining MVCNN?" ]
[ [ "1603.04513-8-Table3-1.png" ] ]
[ "0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj" ]
404
1607.06025
Constructing a Natural Language Inference Dataset using Generative Neural Networks
Natural Language Inference is an important task for Natural Language Understanding. It is concerned with classifying the logical relation between two sentences. In this paper, we propose several text generative neural networks for generating text hypothesis, which allows construction of new Natural Language Inference datasets. To evaluate the models, we propose a new metric -- the accuracy of the classifier trained on the generated dataset. The accuracy obtained by our best generative model is only 2.7% lower than the accuracy of the classifier trained on the original, human crafted dataset. Furthermore, the best generated dataset combined with the original dataset achieves the highest accuracy. The best model learns a mapping embedding for each training example. By comparing various metrics we show that datasets that obtain higher ROUGE or METEOR scores do not necessarily yield higher classification accuracies. We also provide analysis of what are the characteristics of a good dataset including the distinguishability of the generated datasets from the original one.
{ "paragraphs": [ [ "The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails or contradicts or is neutral in respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we are focused on the following natural language generation task based on NLI. Given the premise the goal is to generate a stream of hypotheses that comply with the label (entailment, contradiction or neutral). In addition to reading capabilities this task also requires language generation capabilities.", "The Stanford Natural Language Inference (SNLI) Corpus BIBREF0 is a NLI dataset that contains over a half a million examples. The size of the dataset is sufficient to train powerful neural networks. Several successful classification neural networks have already been proposed BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . In this paper, we utilize SNLI to train generative neural networks. Each example in the dataset consist of two human-written sentences, a premise and a hypothesis, and a corresponding label that describes the relationship between them. Few examples are presented in Table TABREF1 .", "The proposed generative networks are trained to generate a hypothesis given a premise and a label, which allow us to construct new, unseen examples. Some generative models are build to generate a single optimal response given the input. Such models have been applied to machine translation BIBREF5 , image caption generation BIBREF6 , or dialogue systems BIBREF7 . Another type of generative models are autoencoders that generate a stream of random samples from the original distribution. For instance, autoencoders have been used to generate text BIBREF8 , BIBREF9 , and images BIBREF10 . In our setting we combine both approaches to generate a stream of random responses (hypotheses) that comply with the input (premise, label).", "But what is a good stream of hypotheses? We argue that a good stream contains diverse, comprehensible, accurate and non-trivial hypotheses. A hypothesis is comprehensible if it is grammatical and semantically makes sense. It is accurate if it clearly expresses the relationship (signified by the label) with the premise. Finally, it is non-trivial if it is not trivial to determine the relationship (label) between the hypothesis and premise. For instance, given a premise ”A man drives a red car” and label entailment, the hypothesis ”A man drives a car” is more trivial than ”A person is sitting in a red vehicle”.", "The next question is how to automatically measure the quality of generated hypotheses. One way is to use metrics that are standard in text generation tasks, for instance ROUGE BIBREF11 , BLEU BIBREF12 , METEOR BIBREF13 . These metrics estimate the similarity between the generated text and the original reference text. In our task they can be used by comparing the generated and reference hypotheses with the same premise and label. The main issue of these metrics is that they penalize the diversity since they penalize the generated hypotheses that are dissimilar to the reference hypothesis. An alternative metric is to use a NLI classifier to test the generated hypothesis if the input label is correct in respect to the premise. A perfect classifier would not penalize diverse hypotheses and would reward accurate and (arguably to some degree) comprehensible hypotheses. 
However, it would not reward non-trivial hypotheses.", "Non-trivial examples are essential in a dataset for training a capable machine learning model. Furthermore, we make the following hypothesis.", "A good dataset for training a NLI classifier consists of a variety of accurate, non-trivial and comprehensible examples.", "Based on this hypothesis, we propose the following approach for evaluation of generative models, which is also presented in Figure FIGREF2 . First, the generative model is trained on the original training dataset. Then, the premise and label from an example in the original dataset are taken as the input to the generative model to generate a new random hypothesis. The generated hypothesis is combined with the premise and the label to form a new unseen example. This is done for every example in the original dataset to construct a new dataset. Next, a classifier is trained on the new dataset. Finally, the classifier is evaluated on the original test set. The accuracy of the classifier is the proposed quality metric for the generative model. It can be compared to the accuracy of the classifier trained on the original training set and tested on the original test set.", "The generative models learn solely from the original training set to regenerate the dataset. Thus, the model learns the distribution of the original dataset. Furthermore, the generated dataset is just a random sample from the estimated distribution. To determine how well did the generative model learn the distribution, we observe how close does the accuracy of the classifier trained on the generated dataset approach the accuracy of classifier trained on the original dataset.", "Our flagship generative network EmbedDecoder works in a similar fashion as the encoder-decoder networks, where the encoder is used to transform the input into a low-dimensional latent representation, from which the decoder reconstructs the input. The difference is that EmbedDecoder consists only of the decoder, and the latent representation is learned as an embedding for each training example separately. In our models, the latent representation represents the mapping between the premise and the label on one side and the hypothesis on the other side.", "Our main contributions are i) a novel generative neural network, which consist of the decoder that learns a mapping embedding for each training example separately, ii) a procedure for generating NLI datasets automatically, iii) and a novel evaluation metric for NLI generative models – the accuracy of the classifier trained on the generated dataset.", "In Section SECREF2 we present the related work. In Section SECREF3 the considered neural networks are presented. Besides the main generative networks, we also present classification and discriminative networks, which are used for evaluation. The results are presented in Section SECREF5 , where the generative models are evaluated and compared. From the experiments we can see that the best dataset was generated by the attention-based model EmbedDecoder. The classifier on this dataset achieved accuracy of INLINEFORM0 , which is INLINEFORM1 less than the accuracy achieved on the original dataset. We also investigate the influence of latent dimensionality on the performance, compare different evaluation metrics, and provide deeper insights of the generated datasets. The conclusion is presented in Section SECREF6 ." 
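The dataset-regeneration evaluation described above can be summarized in a short skeleton. The `fit`, `generate` and `predict` interfaces and the toy stand-in classes below are hypothetical placeholders for the generative and classification networks defined later in the paper, not a real API.

```python
def regenerate_dataset(generative_model, original_train):
    """Replace every hypothesis with one sampled from the generative model,
    keeping the original premise and label."""
    new_dataset = []
    for premise, _hypothesis, label in original_train:
        new_hypothesis = generative_model.generate(premise, label)  # random latent vector inside
        new_dataset.append((premise, new_hypothesis, label))
    return new_dataset

def evaluate_generative_model(generative_model, original_train, original_test, classifier_cls):
    """Proposed metric: accuracy on the ORIGINAL test set of a classifier
    trained on the regenerated dataset."""
    generative_model.fit(original_train)
    generated_train = regenerate_dataset(generative_model, original_train)
    classifier = classifier_cls()
    classifier.fit(generated_train)
    correct = sum(classifier.predict(p, h) == y for p, h, y in original_test)
    return correct / len(original_test)

# Trivial stand-ins, for illustration only.
class _EchoGenerator:
    def fit(self, data):
        pass
    def generate(self, premise, label):
        return " ".join(premise.split()[:3])   # "generates" a trivial hypothesis

class _MajorityClassifier:
    def fit(self, data):
        labels = [y for _, _, y in data]
        self.majority = max(set(labels), key=labels.count)
    def predict(self, premise, hypothesis):
        return self.majority

toy_train = [("A man drives a red car", "A man drives a car", "entailment"),
             ("A man drives a red car", "A woman is cooking", "contradiction")]
toy_test = [("Kids play in the park", "Children are outside", "entailment")]
print(evaluate_generative_model(_EchoGenerator(), toy_train, toy_test, _MajorityClassifier))
```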
], [ "NLI has been the focal point of Recognizing Textual Entailment (RTE) Challenges, where the goal is to determine if the premise entails the hypothesis or not. The proposed approaches for RTE include bag-of-words matching approach BIBREF14 , matching predicate argument structure approach BIBREF15 and logical inference approach BIBREF16 , BIBREF17 . Another rule-based inference approach was proposed by BIBREF18 . This approach allows generation of new hypotheses by transforming parse trees of the premise while maintaining entailment. BIBREF19 proposes an approach for constructing training datasets by extracting sentences from news articles that tend to be in an entailment relationship.", "After SNLI dataset was released several neural network approaches for NLI classification have emerged. BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The state-of-the-art model BIBREF4 achieves INLINEFORM0 accuracy on the SNLI dataset. A similar generation approach to ours was proposed by BIBREF20 , The goal of this work is generating entailment inference chains, where only examples with entailment label are used.", "Natural Lanuguage Generation (NLG) is a task of generating natural language from a structured form such as knowledge base or logic form BIBREF21 , BIBREF22 , BIBREF23 . The input in our task is unstructured text (premise) and label. On the other side of this spectrum, there are tasks that deal solely with unstructured text, like machine translation BIBREF24 , BIBREF25 , BIBREF26 , summarization BIBREF27 , BIBREF28 and conversational dialogue systems BIBREF7 , BIBREF29 . Another recently popular task is generating captions from images BIBREF30 , BIBREF31 .", "With the advancement of deep learning, many neural network approaches have been introduced for generating sequences. The Recurrent Neural Network Language Model (RNNLM) BIBREF32 is one of the simplest neural architectures for generating text. The approach was extended by BIBREF5 , which use encoder-decoder architecture to generate a sequence from the input sequence. The Hierarchical Recurrent Encoder-Decoder (HRED) architecture BIBREF7 generates sequences from several input sequences. These models offer very little variety of output sequences. It is obtained by modeling the output distribution of the language model. To introduce more variety, models based on variational autoencoder (VAE) BIBREF33 have been proposed. These models use stochastic random variables as a source of variety. In BIBREF8 a latent variable is used to initial the RNN that generates sentences, while the variational recurrent neural network (VRNN) BIBREF34 models the dependencies between latent variables across subsequent steps of RNN. The Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) BIBREF35 extends the HRED by incorporating latent variables, which are learned similarly than in VAE. The latent variables are, like in some of our models, used to represent the mappings between sequences. Conditional variational autoencoders (CVAEs) BIBREF36 were used to generate images from continuous visual attributes. These attributes are conditional information that is fed to the models, like the discrete label is in our models.", "As recognized by BIBREF37 , the evaluation metrics of text-generating models fall into three categories: manual evaluation, automatic evaluation metrics, task-based evaluation. In evaluation based on human judgment each generated textual example is inspected manually. 
The automatic evaluation metrics, like ROUGE, BLEU and METEOR, compare human texts and generated texts. BIBREF38 shows METEOR has the strongest correlation with human judgments in image description evaluation. The last category is task-based evaluation, where the impact of the generated texts on a particular task is measured. This type of evaluation usually involves costly and lengthy human involvement, like measuring the effectiveness of smoking-cessation letters BIBREF39 . On the other hand, the task in our evaluation, the NLI classification, is automatic. In BIBREF40 ranking was used as an automatic task-based evaluation for associating images with captions." ], [ "In this section, we present several neural networks used in the experiments. We start with variants of Recurrent Neural Networks, which are essential layers in all our models. Then, we present classification networks, which are needed in evaluation of generative neural networks presented in the following section. Next, we present how to use generative networks to generate hypothesis. Finally, we present discriminative networks, which are used for evaluation and analysis of the hypotheses.", "The premise INLINEFORM0 and hypothesis INLINEFORM1 are represented with word embeddings INLINEFORM2 and INLINEFORM3 respectively. Each INLINEFORM4 is a INLINEFORM5 -dimensional vector that represents the corresponding word, INLINEFORM6 is the length of premise, and INLINEFORM7 is the length of hypothesis. The labels (entailment, contradiction, neutral) are represented by a 3-dimensional vector INLINEFORM8 if the label is the output of the model, or INLINEFORM9 if the label is the input to the model." ], [ "The Recurrent Neural Networks (RNNs) are neural networks suitable for processing sequences. They are the basic building block in all our networks. We use two variants of RNNs – Long short term memory (LSTM) network BIBREF41 and an attention-based extension of LSTM, the mLSTM BIBREF2 . The LSTM tends to learn long-term dependencies better than vanilla RNNs. The input to the LSTM is a sequence of vectors INLINEFORM0 , and the output is a sequence of vectors INLINEFORM1 . At each time point INLINEFORM2 , input gate INLINEFORM3 , forget gate INLINEFORM4 , output gate INLINEFORM5 , cell state INLINEFORM6 and one output vector INLINEFORM7 are calculated. DISPLAYFORM0 ", "where INLINEFORM0 is a sigmoid function, INLINEFORM1 is the element-wise multiplication operator, INLINEFORM2 and INLINEFORM3 are parameter matrices, INLINEFORM4 parameter vectors, INLINEFORM5 is the input vector dimension, and INLINEFORM6 is the output vector dimension. The vectors INLINEFORM7 and INLINEFORM8 are set to zero in the standard setting, however, in some cases in our models, they are set to a value that is the result of previous layers.", "The mLSTM is an attention-based model with two input sequences – premise and hypothesis in case of NLI. Each word of the premise is matched against each word of the hypothesis to find the soft alignment between the sentences. The mLSTM is based on LSTM in such a way that it remembers the important matches and forgets the less important. The input to the LSTM inside the mLSTM at each time step is INLINEFORM0 , where INLINEFORM1 is an attention vector that represents the weighted sum of premise sequence, where the weights present the degree to which each token of the premise is aligned with the INLINEFORM2 -th token of the hypothesis INLINEFORM3 , and INLINEFORM4 is the concatenation operator. 
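As a rough illustration of this matching step, the sketch below builds the inner LSTM's input for one hypothesis position. The dot-product alignment is a simplifying stand-in; the actual mLSTM of BIBREF2 uses a learned attention parameterization that is not reproduced here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mlstm_step_input(premise_states, hyp_state):
    """Build the inner-LSTM input for one hypothesis position: the
    attention-weighted sum of premise states, concatenated with the current
    hypothesis state.  Dot-product scoring is a simplification; the real
    mLSTM learns its alignment function."""
    scores = premise_states @ hyp_state            # (premise_len,)
    weights = softmax(scores)                      # soft alignment over premise tokens
    attention = weights @ premise_states           # weighted sum, shape (d,)
    return np.concatenate([attention, hyp_state])  # input of size 2d

# Toy usage: premise of 4 tokens, d=8.
rng = np.random.default_rng(0)
P = rng.normal(size=(4, 8))
h_k = rng.normal(size=8)
print(mlstm_step_input(P, h_k).shape)              # (16,)
```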
More details about mLSTM are presented in BIBREF2 ." ], [ "The classification model predicts the label of the example given the premise and the hypothesis. We use the mLSTM-based model proposed by BIBREF2 .", "The architecture of the model is presented in Figure FIGREF9 . The embeddings of the premise INLINEFORM0 and hypothesis INLINEFORM1 are the input to the first two LSTMs to obtain the hidden states of the premise INLINEFORM2 and hypothesis INLINEFORM3 . DISPLAYFORM0 ", "All the hidden states in our models are INLINEFORM0 -dimensional unless otherwise noted. The hidden states INLINEFORM1 and INLINEFORM2 are the input to the mLSTM layer. The output of mLSTM are hidden states INLINEFORM3 , although only the last state INLINEFORM4 is further used. A fully connected layer transforms it into a 3-dimensional vector, on top of which softmax function is applied to obtain the probabilities INLINEFORM5 of labels. DISPLAYFORM0 ", "where INLINEFORM0 represents the fully connected layer, whose output size is INLINEFORM1 ." ], [ "The goal of the proposed generative models, is to generate a diverse stream of hypotheses given the premise and the label. In this section, we present four variants of generative models, two variants of EmbedDecoder model presented in Figure FIGREF11 , and two variants of EncoderDecoder model presented in Figure FIGREF11 .", "All models learn a latent representation INLINEFORM0 that represents the mapping between the premise and the label on one side, and the hypothesis on the other side. The EmbedDecoder models learn the latent representation by learning an embedding of the mapping for each training example separately. The embedding for INLINEFORM1 -th training example INLINEFORM2 is a INLINEFORM3 -dimensional trainable parameter vector. Consequentely, INLINEFORM4 is a parameter matrix of all embeddings, where INLINEFORM5 is the number of training examples. On the other hand, in EncoderDecoder models latent representation is the output of the decoder.", "The EmbedDecoder models are trained to predict the next word of the hypothesis given the previous words of hypothesis, the premise, the label, and the latent representation of the example. DISPLAYFORM0 ", "where INLINEFORM0 represent parameters other than INLINEFORM1 , and INLINEFORM2 is the length of the hypothesis INLINEFORM3 .", "The AttEmbedDecoder, presented in Figure FIGREF26 , is attention based variant of EmbedDecoder. The same mLSTM layer is used as in classification model. However, the initial cell state INLINEFORM0 of mLSTM is constructed from the latent vector and the label input. DISPLAYFORM0 ", "For the sake of simplifying the notation, we dropped the superscript INLINEFORM0 from the equations, except in INLINEFORM1 , where we explicitly want to state that the embedding vector is used.", "The premise and the hypothesis are first processed by LSTM and then fed into the mLSTM, like in the classification model, however here the hypothesis is shifted. The first word of the hypothesis input is an empty token INLINEFORM0 null INLINEFORM1 , symbolizing the empty input sequence when predicting the first word. The output of the mLSTM is a hidden state INLINEFORM2 , where each INLINEFORM3 represents an output word. To obtain the probabilities for all the words in the vocabulary INLINEFORM4 for the position INLINEFORM5 in the output sequence, INLINEFORM6 is first transformed into a vocabulary-sized vector, then the softmax function is applied. DISPLAYFORM0 ", "where V is the size of the vocabulary. 
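The word-probability step just described amounts to an affine projection onto the vocabulary followed by a softmax. A minimal sketch, with hypothetical names and a full (non-hierarchical) softmax, is shown below; the hierarchical variant that the model actually uses is discussed next.

```python
import numpy as np

def word_probabilities(h_k, W_out, b_out):
    """Project a decoder hidden state onto the vocabulary and normalize.
    h_k: (d,) hidden state; W_out: (V, d); b_out: (V,)."""
    logits = W_out @ h_k + b_out
    logits -= logits.max()                         # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()                     # one probability per vocabulary word

# Toy usage: d=32, vocabulary of 1000 words.
rng = np.random.default_rng(0)
p = word_probabilities(rng.normal(size=32), rng.normal(size=(1000, 32)), np.zeros(1000))
print(p.shape, round(float(p.sum()), 6))           # (1000,) 1.0
```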
But, due to the large size of the vocabulary, a two-level hierarchical softmax BIBREF42 was used instead of a regular softmax to reduce the number of parameters updated during each training step. DISPLAYFORM0 ", "In the training step, the last output word INLINEFORM0 is set to INLINEFORM1 null INLINEFORM2 , while in the generating step, it is ignored.", "In the EmbedDecoder model without attention, BaseEmbedDecoder, the mLSTM is replaced by a regular LSTM. The input to this LSTM is the shifted hypothesis. But, here the premise is provided through the initial cell state INLINEFORM0 . Specifically, last hidden state of the premise is merged with class input and the latent representation, then fed to the LSTM. DISPLAYFORM0 ", "In order to not lose information INLINEFORM0 was picked to be equal to sum of the sizes of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . Thus, INLINEFORM4 . Since the size of INLINEFORM5 is INLINEFORM6 , the output vectors of the LSTM are also the size of INLINEFORM7 .", "We also present two variants of EncoderDecoder models, a regular one BaseEncodeDecoder, and a regularized one VarEncoderDecoder, which is based on Variational Bayesian approach. As presented in Figure FIGREF11 , all the information (premise, hypothesis, label) is available to the encoder, whose output is the latent representation INLINEFORM0 . On the other hand, the decoder is provided with the same premise and label, but the hypothesis is shifted. This forces the encoder to learn to encode only the missing information – the mapping between premise-label pair and the hypothesis. The encoder has a similar structure as the classification model in Figure FIGREF9 . Except that the label is connected to the initial cell state of the mLSTM DISPLAYFORM0 ", "and the output of mLSTM INLINEFORM0 is transformed into latent representation INLINEFORM1 DISPLAYFORM0 ", "The decoder is the same as in EmbedDecoder.", "The VarEncoderDecoder models is based on Variational Autoencoder from BIBREF33 . Instead of using single points for latent representation as in all previous models, the latent representation in VarEncoderDecoder is presented as a continuous variable INLINEFORM0 . Thus, the mappings are presented as a soft elliptical regions in the latent space, instead of a single points, which forces the model to fill up the latent space BIBREF8 . Both INLINEFORM1 and INLINEFORM2 are calculated form the output of the encoder using two different fully connected layers. INLINEFORM3 ", "To sample from the distribution the reparametrization trick is applied DISPLAYFORM0 ", "When training, a single sample is generated per example to generate INLINEFORM0 .", "As in BIBREF33 , the following regularization term is added to the loss function DISPLAYFORM0 " ], [ "In the generation phase only decoder of a trained generative model is used. It generates a hypothesis given the premise, label, and a randomly selected latent vector INLINEFORM0 . A single word is generated in each step, and it becomes the hypothesis input in the next step. DISPLAYFORM0 ", "We also used beam search to optimize hypothesis generation. Similarly as in BIBREF5 , a small number of hypotheses are generated given a single input, then the best is selected. In INLINEFORM0 -beam search, in each time step INLINEFORM1 best partial hypotheses are expanded by all the words in the vocabulary producing INLINEFORM2 partial hypothesis. 
Out of these, the INLINEFORM3 best partial hypotheses are selected for the next step according to the joint probability of each partial hypothesis. Thus, when INLINEFORM4 is 1, the procedure is the same as the one presented in Eq EQREF24 . The generation ends when the INLINEFORM5 null INLINEFORM6 symbol is encountered or the maximum hypothesis length is reached. The random latent vector INLINEFORM10 is selected randomly from a normal distribution INLINEFORM11 , where INLINEFORM12 is the standard deviation of INLINEFORM13 ." ], [ "The discriminative model is used to measure the distinguishability between the original human-written sentences and the generated ones. A higher error rate of the model means that the generative distribution is similar to the original distribution, which is one of the goals of the generative model. The model is based on Generative Adversarial Nets BIBREF10 , where in a single network the generative part tries to trick the discriminative part by generating images that are similar to the original images, and the discriminative part tries to distinguish between the original and generated images. Due to the discreteness of words (the output of our generative model) it is difficult to connect both the discriminative and generative part in a single differentiable network, thus we construct them separately. The generative models have already been defined in Section SECREF10 . Here we define the discriminative model.", "The discriminative model INLINEFORM0 takes a sequence INLINEFORM1 and processes it with an LSTM and a fully connected layer DISPLAYFORM0 ", "In the training step, one original sequence INLINEFORM0 and one generated sequence INLINEFORM1 are processed by the discriminative model. The optimization function maximizes the following objective DISPLAYFORM0 ", "In the testing step, the discriminative model predicts correctly if DISPLAYFORM0 " ], [ "To construct a new dataset, first a generative model is trained on the training set of the original dataset. Then, a new dataset is constructed by generating new hypotheses with the generative model. The premises and labels from the examples of the original dataset are taken as the input for the generative model. The new hypotheses replace the training hypotheses in the new dataset.", "Next, the classifier, presented in Section SECREF6 , is trained on the generated dataset. The accuracy of the new classifier is the main metric for evaluating the quality of the generated dataset." ], [ "All the experiments are performed on the SNLI dataset. There are 549,367 examples in the dataset, divided into training, development and test sets. Both the development and test sets contain around 10,000 examples. Some examples are labeled with '-', which means there was not enough consensus on them. These examples are excluded. Also, to speed up the computation, we excluded examples whose premise is longer than 25 words or whose hypothesis is longer than 15 words. There were still INLINEFORM0 remaining examples. Both premises and hypotheses were padded with INLINEFORM1 null INLINEFORM2 symbols (empty words), so that all premises consisted of 25 tokens and all hypotheses consisted of 15 tokens.", "We use 50-dimensional word vectors trained with GloVe BIBREF43 . For words without pretrained embeddings, the embeddings are randomly selected from the normal distribution. 
Word embeddings are not updated during training.", "For optimization Adam method BIBREF44 was used with suggested hyperparameters.", "Classification models are trained until the loss on the validation set does not improve for three epochs. The model with best validation loss is retained.", "Generative models are trained for 20 epochs, since it turned out that none of the stopping criteria were useful. With each generative model a new dataset is created. The new dataset consists of training set, which is generated using examples from the original training set, and a development set, which is generated from the original development set. The beam size for beam search was set to 1. The details of the decision are presented in Section SECREF35 .", "Some datasets were constructed by filtering the generated datasets according to various thresholds. Thus, the generated datasets were constructed to contain enough examples, so that the filtered datasets had at least the number of examples as the original dataset. In the end, all the datasets were trimmed down to the size of the original dataset by selecting the samples sequentially from the beginning until the dataset had the right size. Also, the datasets were filtered so that each of the labels was represented equally. All the models, including classification and discriminative models, were trained with hidden dimension INLINEFORM0 set to 150, unless otherwise noted.", "Our implementation is accessible at http://github.com/jstarc/nli_generation. It is based on libraries Keras and Theano BIBREF45 ." ], [ "First, the classification model OrigClass was trained on the original dataset. This model was then used throughout the experiments for filtering the datasets, comparison, etc. Notice that we have assumed OrigClass to be ground truth for the purpose of our experiments. However, the accuracy of this model on the original test set was INLINEFORM0 , which is less than INLINEFORM1 , which was attained by mLSTM (d=150) model in BIBREF2 . Both models are very similar, including the experimental settings, however ours was trained and evaluated on a slightly smaller dataset." ], [ "Several AttEmbedDecoder models with various latent dimensions INLINEFORM0 were first trained and then used to generate new datasets. A couple of generated examples are presented in Table TABREF36 .", "Figure FIGREF37 shows the accuracies of the generated development datasets evaluated by the OrigClass. The maximum accuracy of INLINEFORM0 was achieved by EmbedDecoder (z=2), and the accuracy is decreasing with the number of dimensions in the latent variable. The analysis for each label shows that the accuracy of contradiction and neutral labels is quite stable, while the accuracy of the entailment examples drops significantly with latent dimensionality. One reason for this is that the hypothesis space of the entailment label is smaller than the spaces of other two labels. Thus, when the dimensionality is higher, more creative examples are generated, and these examples less often comply with the entailment label.", "Since none of the generated datasets' accuracies is as high as the accuracy of the OrigClass on the original test set, we used OrigClass to filter the datasets subject to various prediction thresholds. The examples from the generated dataset were classified by OrigClass and if the probability of the label of the example exceeded the threshold INLINEFORM0 , then the example was retained.", "For each filtered dataset a classifier was trained. 
Figure FIGREF38 shows the accuracies of these classifiers on the original test set. Filtering out the examples that have incorrect labels (according to the OrigClass) improves the accuracy of the classifier. However, if the threshold is set too high, the accuracy drops, since the dataset contains examples that are too trivial. Figure FIGREF38 , which represents the accuracy of classifiers on their corresponding generated development sets, further shows the trade-off between the accuracy and triviality of the examples. The classifiers trained on datasets with low latent dimension or high filtering threshold have higher accuracies. Notice that the training dataset and test dataset were generated by the same generative model.", "The unfiltered datasets have been evaluated with five other metrics besides classification accuracy. The results are presented in Figure FIGREF41 . The whole figure shows the effect of latent dimensionality of the models on different metrics. The main purpose of the figure is not show absolute values for each of the metrics, but to compare the metrics' curves to the curve of our main metric, the accuracy of the classifier.", "The first metric – Premise-Hypothesis Distance – represents the average Jaccard distance between the premise and the generated hypothesis. Datasets generated with low latent dimensions have hypotheses more similar to premises, which indicates that the generated hypotheses are more trivial and less diverse than hypothesis generated with higher latent dimensions.", "We also evaluated the models with standard language generation metrics ROUGE-L and METEOR. The metrics are negatively correlated with the accuracy of the classifier. We believe this is because the two metrics reward hypotheses that are similar to their reference (original) hypothesis. However, the classifier is better if trained on more diverse hypotheses.", "The next metric is the log-likelihood of hypotheses in the development set. This metric is the negative of the training loss function. The log-likelihood improves with dimensionality since it is easier to fit the hypotheses in the training step having more dimensions. Consequently, the hypothesis in the generating step are more confident – they have lower log-likelihood.", "The last metric – discriminative error rate – is calculated with the discriminative model. The model is trained on the hypotheses from the unfiltered generated dataset on one side and the original hypotheses on the other side. Error rate is calculated on the (generated and original) development sets. Higher error rate indicates that it is more difficult for discriminative model to distinguish between the generated and the original hypotheses, which suggests that the original generating distribution and the distribution of the generative model are more similar. The discriminative model detects that low dimensional generative models generate more trivial examples as also indicated by the distance between premise and hypotheses. On the other hand, it also detects the hypotheses of high dimensional models, which more frequently contain grammatic or semantic errors.", "There is a positive correlation between the discriminative error rate and the accuracy of the classifier. This observation led us to the experiment, where the generated dataset was filtered according to the prediction probability of the discriminative model. Two disjoint filtered datasets were created. 
One with hypotheses that had high probability that they come from the original distribution and the other one with low probability. However, the accuracies of classifiers trained on these datasets were very similar to the accuracy of the classifier on the unfiltered dataset. Similar test was also done with the log-likelihood metric. The examples with higher log-likelihood had similar performance than the ones with lower log-likelihood. This also lead us to set the size of the beam to 1. Also, the run time of generating hypothesis is INLINEFORM0 , where INLINEFORM1 is beam size. Thus, with lower beam sizes much more hypotheses can be generated.", "To accept the hypothesis from Section SECREF1 we have shown that a quality dataset requires accurate examples by showing that filtering the dataset with the original classifier improves the performance (Figure FIGREF38 ). Next, we have shown that non-trivial examples are also required. If the filtering threshold is set too high, these examples are excluded, and the accuracy drops. Also, the more trivial examples are produced by low-dimensional models, which is indicated by lower premise-hypothesis distances, and lower discriminative error rate (Figure FIGREF41 ). Finally, a quality dataset requires more comprehensible examples. The high dimensional models produce less comprehensible hypotheses. They are detected by the discriminative model (see discriminator error rate in Figure FIGREF41 )." ], [ "We also compared AttEmbedDecoder model to all other models. Table TABREF43 presents the results. For all the models the latent dimension INLINEFORM0 is set to 8, as it was previously shown to be one of the best dimensions.", "For all the models the number of total parameters is relatively high, however only a portion of parameters get updated each time. The AttEmbedDecoder model was the best model according to our main metric – the accuracy of the classifier trained on the generated dataset.", "The hidden dimension INLINEFORM0 of the BaseEmbedDecoder was selected so that the model was comparable to AttEmbedDecoder in terms of the number of parameters INLINEFORM1 . The accuracies of classifiers generated by BaseEmbedDecoder are still lower than the accuracies of classifiers generated by AttEmbedDecoder, which shows that the attention mechanism helps the models.", "Table TABREF44 shows the performance of generated datasets compared to the original one. The best generated dataset was generated by AttEmbedDecoder. The accuracy of its classifier is only 2.7 % lower than the accuracy of classifier generated on the original human crafted dataset. The comparison of the best generated dataset to the original dataset shows that the datasets had only INLINEFORM0 of identical examples. The average length of the hypothesis was INLINEFORM1 and INLINEFORM2 in the original dataset and in the generated dataset, respectively. In another experiment the generated dataset and the original dataset were merged to train a new classifier. Thus, the merged dataset contained twice as many examples as other datasets. The accuracy of this classifier was 82.0%, which is 0.8 % better than the classifier trained solely on the original training set. However, the lowest average loss is achieved by the classifier trained on the original dataset." ], [ "We also did a qualitative evaluation of the generated hypothesis. Hypotheses are mostly grammatically sound. 
Sometimes the models incorrectly use indefinite articles, for instance ”an phone”, or possessive pronouns ”a man uses her umbrella”. These may be due to the fact that the system must learn the right indefinite article for every word separately. On the other hand, the models sometimes generate hypotheses that showcase more advanced grammatical patterns. For instance, the hypothesis ”The man and woman have a cake for their family” shows that the model can correctly use the plural in a non-trivial setting. Generative neural networks have a tendency to repeat words, which sometimes makes sentences meaningless, like ”A cup is drinking from a cup of coffee”, or even ungrammatical, like ”Several people in a car car”.", "As shown previously, the larger the latent dimension, the more creative the generated hypotheses. However, with more creativity, semantic errors emerge. Some hypotheses are correct, just unlikely to be written by a human, like ”A shirtless man is holding a guitar with a woman and a woman”. Others present improbable events, like ”The girls were sitting in the park watching tv”, or even impossible events, for instance ”The child is waiting for his wife”. This type of error arises because the models have not learned enough common-sense logic. Finally, there are hypotheses that make no sense. For instance, ”Two women with grassy beach has no tennis equipment”. On the contrary, the models are able to generate some non-trivial hypotheses. From the original premise ”A band performing with a girl singing and a guy next to her singing as well while playing the guitar”, the model has generated some hypotheses that do not contain concepts explicitly found in the premise. For instance, ”People are playing instruments” (entailment), ”The band was entirely silent” (contradiction), or ”The girl is playing at the concert” (neutral).", "Regarding the compliance of the hypotheses with the label and premise, we observed that many generated hypotheses do not comply with the label; however, they would be very good examples with a different label. For instance, the generated hypotheses represent entailment instead of contradiction. This also explains why the accuracy of the generated dataset measured by the original classifier is low in Figure FIGREF37 . On the other hand, the models generate examples that are more ambiguous and not as clear as those in the original dataset. These examples are harder to classify even for a human. For instance, the relationship between the premise ”A kid hitting a baseball in a baseball field” and the hypothesis ”The baseball player is trying to get the ball” can be interpreted either as an entailment, if the verb get is interpreted as not to miss, or as a contradiction, if get is interpreted as possess. For a deeper insight into the generated hypotheses, more examples are presented in SECREF7 .", "The gap between the discriminative error rates (disc-er) of EncoderDecoder models and EmbedDecoder models in Table TABREF43 is significant. To further investigate, the same experiment was performed again by a human evaluator and the discriminative model, this time on a sample of 200 examples. To recap, both the model and the human were asked to select the generated hypothesis given a random original and generated hypothesis, without knowing which one is which.", "Human evaluation confirms that AttEmbedDecoder hypotheses are more difficult to separate from the original ones than the hypotheses of VaeEncoderDecoder. Table TABREF46 presents the results. 
The discriminative model discriminates better than the human evaluator. This may be due to the fact that the discriminative model has learned from a large training set, while the human was not shown any training examples. Human evaluation has shown that generated hypotheses are positively recognized if they contain a grammatical or semantic error. But even if the generated hypothesis does not contain these errors, it sometimes reveals itself by not being as sophisticated as the original example. On the other hand, the discriminative model does not always recognize these discrepancies. It relies more on the differences in distributions learned form a big training set. The true number of non-distinguishable examples may be even higher than indicated by the human discriminator error rate since the human may have correctly guessed some of the examples he could not distinguish." ], [ "In this paper, we have proposed several generative neural networks for generating hypothesis using NLI dataset. To evaluate these models we propose the accuracy of classifier trained on the generated dataset as the main metric. The best model achieved INLINEFORM0 accuracy, which is only INLINEFORM1 less than the accuracy of the classifier trained on the original human written dataset, while the best dataset combined with the original dataset has achieved the highest accuracy. This model learns a decoder and a mapping embedding for each training example. It outperforms the more standard encoder-decoder networks. Although more parameters are needed to be trained, less are updated on each batch. We have also shown that the attention mechanism improves the model. The analysis has confirmed our hypothesis that a good dataset contains accurate, non-trivial and comprehensible examples. To further examine the quality of generated hypothesis, they were compared against the original human written hypotheses. The discriminative evaluation shows that in INLINEFORM2 of cases the human evaluator incorrectly distinguished between the original and the generated hypothesis. The discriminative model was actually better in distinguishing. We have also compared the accuracy of classifier to other metrics. The standard text generation metrics ROUGE and METEOR do not indicate if a generated dataset is good for training a classifier.", "To obtain higher accuracies of the generated datasets, they need to be filtered, because the generative models produce examples, whose label is not always accurate. Thus, we propose for future work incorporating the classifier into the generative model, in a similar fashion that it was done on images by BIBREF46 . This network could also include the discriminative model to generate examples from a distribution that is more similar to the original training distribution. Finally, constructing a dataset requires a lot of intensive manual work that mainly consists of writing text with some creativity. To extend the original dataset human users could just validate or correct the generated examples. On top of that we would like to develop active learning methods to identify incorrect generated examples that would most improve the dataset if corrected." ], [ "This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (ICT-STREP-288342) and XLime (FP7-ICT-611346)." ], [ "In this section more generated hypotheses are presented. Each example starts with the original example data. 
Then, several hypotheses generated with from the original example with our best model are displayed.", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ] ], "section_name": [ "Introduction", "Related Work", "Models", "Recurrent Neural Networks", "Classification model", "Generative models", "Generating hypotheses", "Discriminative model", "Dataset Generation", "Experiment details", "Results", "Preliminary evaluation", "Other models", "Qualitative evaluation", "Conclusion", "Acknowledgements", "More Examples" ] }
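To make the generation procedure in the full text above concrete, the following is a minimal plain-Python sketch of the k-beam search it describes: the k best partial hypotheses are expanded with every vocabulary word, re-ranked by joint (summed) log-probability, and decoding stops at the null symbol or the length limit; k = 1 reduces to the greedy procedure. The decoder interface, the null-token id, and the length limit are illustrative assumptions, not the authors' Keras/Theano implementation.

# Minimal sketch of the k-beam search used to generate hypotheses, assuming a
# trained decoder exposed as `next_token_log_probs(prefix) -> log-probs over
# the vocabulary`. NULL_ID and MAX_HYP_LEN are illustrative assumptions.
from typing import Callable, List, Tuple

NULL_ID = 0        # assumed id of the <null> end-of-hypothesis symbol
MAX_HYP_LEN = 15   # maximum hypothesis length used in the experiments

def beam_search(next_token_log_probs: Callable[[List[int]], List[float]],
                vocab_size: int, k: int = 1) -> List[int]:
    """Expand the k best partial hypotheses with every vocabulary word and keep
    the k best by joint log-probability; k=1 is greedy decoding."""
    beams: List[Tuple[List[int], float]] = [([], 0.0)]
    for _ in range(MAX_HYP_LEN):
        candidates: List[Tuple[List[int], float]] = []
        for tokens, score in beams:
            if tokens and tokens[-1] == NULL_ID:   # hypothesis already finished
                candidates.append((tokens, score))
                continue
            log_probs = next_token_log_probs(tokens)
            for word_id in range(vocab_size):
                candidates.append((tokens + [word_id], score + log_probs[word_id]))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
        if all(t and t[-1] == NULL_ID for t, _ in beams):
            break
    return beams[0][0]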
{ "answers": [ { "annotation_id": [ "2cf0a303727b51c1d38502912a5b727cfce62ac0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6." ], "extractive_spans": [], "free_form_answer": "82.0%", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "c4cd6a88d1cc4a94a66c3efb5ef7f0a0a6eccae5" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What is the highest accuracy score achieved?", "What is the size range of the datasets?" ], "question_id": [ "ea6764a362bac95fb99969e9f8c773a61afd8f39", "62c4c8b46982c3fcf5d7c78cd24113635e2d7010" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Table 1: Three NLI examples from SNLI.", "Figure 1: Evaluation of NLI generative models. Note that both datasets are split on training test and validation sets.", "Figure 2: NLI classification model", "Figure 3: Generative models architecture. The rounded boxes represent trainable parameters, blue boxes are inputs, green boxes are outputs and the orange box represents the mapping embeddings. 0-Hypo denotes the shifted <null>-started hypothesis. Note that in EncoderDecoder model the latent representation Z is just a hidden layer, while in EmebedDecoder it is a trainable parameter matrix.", "Figure 4: AttEmbedDecoder model", "Table 2: Generated examples to illustrate the proposed appraoch.", "Figure 5: Accuracies of the unfiltered generated datasets classified by OrigClass. A dataset was generated for each generative model with different latent dimension z ∈ [2, 4, 8, 16, 32, 147]. For each dataset the examples were classified with OrigClass. The predicted labels were taken as a golden truth and were compared to the labels of the generated dataset to measure its accuracy. The accuracies were measured for all the labels together and for each label separately.", "Figure 6: Accuracies of classifiers trained on the generated dataset and tested on the original test set and the generated development sets. A dataset was generated for each generative model with different latent dimension z ∈ [2, 4, 8, 16, 32, 147]. From these unfiltered datasets new datasets were created by filtering according to various prediction thresholds (0.0, 0.3, 0.6, 0.9), which also represent chart lines. A classifier was trained on each of the datasets. Each point represents the accuracy of a single classifier. The classifiers were evaluated on the original test set in Figure 6a. Each classifier was evaluated on its corresponding generated development set in Figure 6b.", "Figure 7: Comparison of unfiltered generated datasets using various metrics. Each dataset was generated by a model with a different latent dimension, then each metric was applied on each dataset. For metrics other than classifier accuracy and discriminator error rate, the metric was applied on each example and the average was calculated for each dataset.", "Table 3: Comparison of generative models. Column |θtotal| is the total number of trainable parameters. Column |θ∗| represents the number of parameters that are updated with each training example. Thus, hierarchical softmax and latent representation parameters are excluded from this measure. Columns acc@0.0 and acc@0.6 represent the accuracy of the classifier trained on the unfiltered dataset and on the dataset filtered with threshold 0.6, respectively. Column acc-data presents the accuracy of the unfiltered development dataset evaluated by OrigClass. Column nll presents the negative log-likelihood of the unfiltered development dataset. The error rates of the discriminative models are presented by disc-er.", "Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. 
The generated datasets were filtered with threshold 0.6.", "Table 5: Discrimination error rate on the development set and a sample of 200 examples, evaluated by the discriminative model and human evaluator" ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "6-Figure2-1.png", "6-Figure3-1.png", "9-Figure4-1.png", "11-Table2-1.png", "12-Figure5-1.png", "12-Figure6-1.png", "14-Figure7-1.png", "15-Table3-1.png", "15-Table4-1.png", "16-Table5-1.png" ] }
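As a sketch of the threshold filtering referred to in the captions above, the snippet below keeps a generated example only when the original classifier assigns its label a probability of at least t; the classifier interface and example layout are assumptions rather than the released implementation.

# Sketch of filtering a generated dataset with the original classifier
# (OrigClass): keep an example only if the probability assigned to its label
# exceeds the threshold t. `predict_proba` and the example dictionaries are
# illustrative assumptions.
from typing import Dict, List

LABELS = ["entailment", "contradiction", "neutral"]

def filter_generated(examples: List[Dict], orig_class, t: float) -> List[Dict]:
    kept = []
    for ex in examples:  # ex: {"premise": ..., "hypothesis": ..., "label": ...}
        probs = orig_class.predict_proba(ex["premise"], ex["hypothesis"])
        if probs[LABELS.index(ex["label"])] >= t:
            kept.append(ex)
    return kept

# Thresholds explored in the figures above; t=0.0 keeps the unfiltered dataset.
THRESHOLDS = [0.0, 0.3, 0.6, 0.9]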
[ "What is the highest accuracy score achieved?" ]
[ [ "1607.06025-15-Table4-1.png" ] ]
[ "82.0%" ]
405
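For reference, here is a generic sketch of the reparametrization trick and KL regularizer that the VarEncoderDecoder model in the record above relies on; it follows the standard variational autoencoder formulation and is written in NumPy for illustration, not taken from the authors' code.

# Generic sketch of the reparametrization trick and KL regularizer used by the
# VarEncoderDecoder described above (standard VAE formulation); NumPy is used
# only for illustration.
import numpy as np

def sample_latent(mu: np.ndarray, log_sigma: np.ndarray) -> np.ndarray:
    """z = mu + sigma * eps with eps ~ N(0, I); the randomness is isolated in
    eps so gradients can flow through mu and sigma."""
    eps = np.random.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_regularizer(mu: np.ndarray, log_sigma: np.ndarray) -> float:
    """KL(N(mu, diag(sigma^2)) || N(0, I)), added to the reconstruction loss."""
    return -0.5 * float(np.sum(1.0 + 2.0 * log_sigma - mu ** 2 - np.exp(2.0 * log_sigma)))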
1909.04181
BERT-Based Arabic Social Media Author Profiling
We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks.
{ "paragraphs": [ [ "The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). Availability of such data have made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use the total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers(BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.", "In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude." ], [ "For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared tasks set up, the test set is distributed without labels and participants were expected to submit their predictions on test. The shared task predictions are expected by organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 720,00 tweets posted by 720 users. For our experiments, we split the training data released by organizers into 90% TRAIN set (202,500 tweets from 2,025 users) and 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}." ], [ "As explained earlier, the shared task is set up at the user level where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions at the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce an additional in-house dataset labeled with dialect and gender tags to the task as we will explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models." ], [ "Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. For each network, the network contains a layer unidirectional GRU, with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\\mu =0$, and $\\sigma =1$, i.e., $W \\sim N(0,1)$. 
We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For the training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that performs highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtains best results with 2 epochs." ], [ "For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model was pre-trained on the Wikipedia of 104 languages (including Arabic), with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model is 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% higher for gender." ], [ "To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender." ], [ "Our afore-mentioned models predict the profiling labels at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99, taking a softmax-based majority class as the user-level predicted label and tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender." ], [ "First submission. 
For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from tweet to user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.", "Third submission. Finally, for our third submission, we use a majority vote of (1) first submission, (2) second submission, and (3) predictions from our user-level BERT model. These majority class models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy." ], [ "In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods." ], [ "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca)." ] ], "section_name": [ "Introduction", "Data", "Experiments", "Experiments ::: Tweet-Level Models ::: Baseline GRU.", "Experiments ::: Tweet-Level Models ::: BERT.", "Experiments ::: Tweet-Level Models ::: Data Augmentation.", "Experiments ::: User-Level Models", "Experiments ::: APDA@FIRE2019 submission", "Conclusion", "Acknowledgement" ] }
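A hedged, Keras-style sketch of the baseline GRU classifier described in this record (a 500-unit unidirectional GRU, N(0,1)-initialized embeddings over a 100,000-word vocabulary, dropout of 0.5, and Adam at 1e-3) follows; the deep-learning framework, the embedding dimension, and the loss are assumptions that the text does not specify.

# Keras-style sketch of the baseline GRU classifier described above. The
# framework choice, EMBED_DIM, and the loss are assumptions; vocabulary size,
# GRU units, dropout rate, and learning rate follow the text.
import tensorflow as tf

VOCAB_SIZE = 100_000   # 100,000 most frequent words in TRAIN
EMBED_DIM = 300        # assumption: embedding size is not given in the text

def build_gru_classifier(num_classes: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            VOCAB_SIZE, EMBED_DIM,
            embeddings_initializer=tf.keras.initializers.RandomNormal(0.0, 1.0)),
        tf.keras.layers.GRU(500),       # unidirectional GRU with 500 units
        tf.keras.layers.Dropout(0.5),   # dropout applied to the hidden layer
        # The text describes a linear output layer; the softmax for class
        # probabilities and the loss below are assumptions for training.
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# e.g. 3 age classes, 15 dialects, or 2 genders; trained with batch size 32.
age_model = build_gru_classifier(3)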
{ "answers": [ { "annotation_id": [ "5d53fcb4aec782f6e44f8f9c9654f7a112c00fe3" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "60c1f399c201faa406a684e50edbe6472aa327b5" ], "answer": [ { "evidence": [ "Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. For each network, the network contains a layer unidirectional GRU, with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\\mu =0$, and $\\sigma =1$, i.e., $W \\sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For the training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that performs highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtains best results with 2 epochs." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Our baseline is a GRU network for each of the three tasks." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2d013b7e3918dfb512797cbaf16f4c4dbbd34cde" ], "answer": [ { "evidence": [ "To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender." ], "extractive_spans": [ "we manually label an in-house dataset of 1,100 users with gender tags", "we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task" ], "free_form_answer": "", "highlighted_evidence": [ "To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. 
For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users, 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines.", "For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "94dba316338601e012026afb46fd0a811da6dd0a" ], "answer": [ { "evidence": [ "For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared tasks set up, the test set is distributed without labels and participants were expected to submit their predictions on test. The shared task predictions are expected by organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 720,00 tweets posted by 720 users. For our experiments, we split the training data released by organizers into 90% TRAIN set (202,500 tweets from 2,025 users) and 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}." ], "extractive_spans": [], "free_form_answer": "Data released for APDA shared task contains 3 datasets.", "highlighted_evidence": [ "For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Does the paper report F1-scores for the age and language variety tasks?", "Are the models compared to some baseline models?", "What are the in-house data employed?", "What are the three datasets used in the paper?" ], "question_id": [ "e9cfe3f15735e2b0d5c59a54c9940ed1d00401a2", "52ed2eb6f4d1f74ebdc4dcddcae201786d4c0463", "2c576072e494ab5598667cd6b40bc97fdd7d92d7", "8602160e98e4b2c9c702440da395df5261f55b1f" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "dialect", "dialect", "dialect", "dialect" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. Tweet level results on DEV", "Table 2. Results of our submissions on official test data (user level)" ], "file": [ "3-Table1-1.png", "4-Table2-1.png" ] }
[ "What are the three datasets used in the paper?" ]
[ [ "1909.04181-Data-0" ] ]
[ "Data released for APDA shared task contains 3 datasets." ]
406
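The tweet-to-user step described in the record above can be sketched as follows: tweet-level predictions whose softmax score clears a threshold are kept, a per-user majority vote gives the user label, and the threshold is tuned on DEV. The input format and the fallback rule for users whose tweets are all filtered out are illustrative assumptions, not the authors' code.

# Sketch of porting tweet-level predictions to user-level labels: filter by
# softmax confidence, take a per-user majority vote, and tune the threshold on
# DEV. Data structures and the fallback rule are assumptions.
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

Pred = Tuple[str, str, float]  # (user_id, predicted_label, softmax_score)

def user_level_labels(tweet_preds: List[Pred], threshold: float) -> Dict[str, str]:
    confident = defaultdict(Counter)
    everything = defaultdict(Counter)
    for user_id, label, score in tweet_preds:
        everything[user_id][label] += 1
        if score >= threshold:
            confident[user_id][label] += 1
    labels = {}
    for user_id, all_votes in everything.items():
        votes = confident[user_id] or all_votes   # fallback: unfiltered vote
        labels[user_id] = votes.most_common(1)[0][0]
    return labels

def tune_threshold(tweet_preds: List[Pred], gold: Dict[str, str]) -> float:
    """Pick the threshold in [0.00, 0.99] with the best user-level DEV accuracy."""
    def acc(t: float) -> float:
        pred = user_level_labels(tweet_preds, t)
        return sum(pred.get(u) == g for u, g in gold.items()) / len(gold)
    return max((round(i / 100, 2) for i in range(100)), key=acc)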
1909.00252
Humor Detection: A Transformer Gets the Last Laugh
Much previous work has been done in attempting to identify humor in text. In this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. We present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using these ratings to determine the level of humor, we then employ a Transformer architecture for its advantages in learning from sentence context. We demonstrate the effectiveness of this approach and show results that are comparable to human performance. We further demonstrate our model's increased capabilities on humor identification problems, such as the previously created datasets for short jokes and puns. These experiments show that this method outperforms all previous work done on these tasks, with an F-measure of 93.1% for the Puns dataset and 98.6% on the Short Jokes dataset.
{ "paragraphs": [ [ "Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization BIBREF0, Natural Language Generation BIBREF1, and Text Classification BIBREF2. Such advances have enabled scientists to study common language practices. One such area, humor, has garnered focus in classification BIBREF3, BIBREF4, generation BIBREF5, BIBREF6, and in social media BIBREF7.", "The next question then is, what makes a joke humorous? Although humor is a universal construct, there is a wide variety between what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting.", "What enables us to perform such an analysis are the recent improvements in neural network architecture for natural language processing. These breakthroughs started with the Convolutional Neural Network BIBREF8 and have recently included the inception BIBREF9 and progress of the Attention mechanism BIBREF10, BIBREF11, and the Transformer architecture BIBREF12." ], [ "In the related work of joke identification, we find a myriad of methods employed over the years: statistical and N-gram analysis BIBREF13, Regression Trees BIBREF14, Word2Vec combined with K-NN Human Centric Features BIBREF15, and Convolutional Neural Networks BIBREF4.", "This previous research has gone into many settings where humor takes place. BIBREF4 studied audience laughter compared to textual transcripts in order to identify jokes in conversation, while much work has also gone into using and creating datasets like the Pun of the Day BIBREF15, 16000 One-liners BIBREF16, and even Ted Talks BIBREF4." ], [ "We gathered jokes from a variety of sources, each covering a different type of humor. These datasets include jokes of multiple sentences (the Short Jokes dataset), jokes with only one sentence (the Puns dataset), and more mixed jokes (the Reddit dataset). We have made our code and datasets open source for others to use." ], [ "Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.", "Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. 
We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ], [ "The Short Jokes dataset, found on Kaggle, contains 231,657 short jokes scraped from various joke websites with lengths ranging from 10 to 200 characters. The previous work by BIBREF4 combined this dataset with the WMT162 English news crawl. Although their exact combined dataset is not publicly available, we used the same method and news crawl source to create a similar dataset. We built this new Short Jokes dataset by extracting sentences from the WMT162 news crawl that had the same distribution of words and characters as the jokes in the Short Jokes dataset on Kaggle. This was in order to match the two halves (jokes and non-jokes) as closely as possible." ], [ "This dataset was scraped by BIBREF15 and contains 16001 puns and 16002 not-punny sentences. We gratefully acknowledge their help in putting together and giving us use of this dataset. These puns were constructed from the Pun of the Day website while the negative samples were gathered from news websites." ], [ "In this section we will discuss the methods and model used in our experiments." ], [ "We have chosen to use the pre-trained BERT BIBREF17 as the base of our model. BERT is a multi-layer bidirectional Transformer encoder and was initially trained on a 3.3 billion word corpus. The model can be fined-tuned with another additional output layer for a multitude of other tasks. We chose to use this Transformer based model as our initial platform because of its success at recognizing and attending to the most important words in both sentence and paragraph structures.", "In Figure 1, originally designed by BIBREF12, we see the architecture of a Transformer model: the initial input goes up through an encoder, which has two parts: a multi-headed self attention layer, followed by a feed-forward network. It then outputs the information into the decoder, which includes the previously mentioned layers, plus an additional masked attention step. Afterwords, it is transformed through a softmax into the output. This model's success is in large part due to the Transformer's self-attention layers.", "We chose a learning rate of 2e-05 and a max sequence length of 128. We trained the model for a maximum of 7 epochs, creating checkpoints along the way." ], [ "Since our data was unbalanced we decided to upsample the humorous jokes in training. We split the dataset into a 75/25 percent split, stratifying with the labels. We then upsampled the minority class in the training set until it reached an even 50 percent. This helped our model learn in a more balanced way despite the uneven amount of non-humorous jokes. Our validation and test sets were composed of the remaining 25%, downsampling the data into a 50/50 class split so that the accuracy metric could be balanced and easily understood.", "To show how our model compares to the previous work done, we also test on the Short Joke and Pun datasets mentioned in the Data section. For these datasets we will use the metrics (Accuracy, Precision, Recall, and F1 Score) designated in BIBREF4 as a comparison. We use the same model format as previously mentioned, trained on the Reddit dataset. We then immediately apply the model to predict on the Short Joke and Puns dataset, without further fine-tuning, in order to compare the model. 
However, because both the Puns and Short Joke datasets have large and balanced labels, we do so without the upsampling and downsampling steps used for the Reddit dataset." ], [ "In this section we will introduce the baselines and models used in our experiments." ], [ "In order to have fair baselines, we used the following two models: a CNN with Highway Layers as described by BIBREF4 and developed by BIBREF18, and human performance from a study on Amazon's Mechanical Turk. We wanted to have the general population rate these same jokes, thus showing the difference between a general audience and a specific subset of the population, in particular, Reddit r/Jokes users. Since the Reddit users obviously found these jokes humorous, this experiment would show whether or not a more general population agreed with those labels.", "We had 199 unique participants rate an average of 30 jokes each with the prompt \"do you find this joke humorous?\" If the participant was evaluating a sample from a body or punchline only dataset we prefaced our question with a sentence explaining that context, for example: \"Below is the punchline of a joke. Based on this punchline, do you think you would find this joke humorous?\" Taking these labels, we used the most frequently chosen tag from a majority vote to calculate the percentages found in the Human section of Table 2." ], [ "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. We also note that the general human classification found 66.3% of the jokes to be humorous.", "In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence.", "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features." ], [ "Considering that a joke's humor value is subjective, the results on the Reddit dataset are surprising. The model has used the context of the words to determine, with high probability, what an average Reddit r/Jokes viewer will find humorous. When we look at the general population's opinion as well, we find a stark difference between their preferences and those of the Reddit users (Table 2). We would hypothesize that our model is learning the specific type of humor enjoyed by those who use the Reddit r/Jokes forum. 
This would suggest that humor can be learned for a specific subset of the population.", "The model's high accuracy and F1 scores on the Short Jokes and Pun of the Day dataset show the effectiveness of the model for transfer learning. This result is not terribly surprising. If the model can figure out which jokes are funny, it seems to be an easier task to tell when something isn't a joke at all.", "Although these results have high potential, defining the absolute truth value for a joke's humor is a challenging, if not impossible task. However, these results indicate that, at least for a subset of the population, we can find and identify jokes that will be most humorous to them." ], [ "In this paper, we showed a method to define the measure of a joke's humor. We explored the idea of using machine learning tools, specifically a Transformer neural network architecture, to discern what jokes are funny and what jokes are not. This proposed model does not require any human interaction to determine, aside from the text of the joke itself, which jokes are humorous. This architecture can predict the level of humor for a specific audience to a higher degree than a general audience consensus. We also showed that this model has increased capability in joke identification as a result, with higher accuracy and F1 scores than previous work on this topic." ] ], "section_name": [ "Introduction", "Related Work", "Data", "Data ::: Reddit", "Data ::: Short Jokes", "Data ::: Pun of the Day", "Methods", "Methods ::: Our Model", "Methods ::: Training", "Experiments", "Experiments ::: Baselines", "Experiments ::: Results", "Discussion", "Conclusion" ] }
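The class-balancing recipe described in this record's Training subsection (a stratified 75/25 split, upsampling the minority class of the training portion to an even 50 percent, and downsampling the held-out portion to a 50/50 split so accuracy is interpretable) can be sketched as follows. This is a minimal illustration only: the use of pandas and scikit-learn, the `label` column name, and the single held-out frame are my assumptions, not the authors' released code.

```python
# Minimal sketch of the balancing scheme described above (assumed `label`
# column; pandas/scikit-learn are assumptions, not the authors' code).
import pandas as pd
from sklearn.model_selection import train_test_split

def balanced_splits(df, label_col="label", seed=0):
    # Stratified 75/25 split, as in the Training subsection.
    train, heldout = train_test_split(
        df, test_size=0.25, stratify=df[label_col], random_state=seed
    )

    # Upsample the minority class of the training portion to a 50/50 ratio.
    counts = train[label_col].value_counts()
    minority, majority = counts.idxmin(), counts.idxmax()
    upsampled = train[train[label_col] == minority].sample(
        n=counts[majority], replace=True, random_state=seed
    )
    train_balanced = pd.concat(
        [train[train[label_col] == majority], upsampled]
    ).sample(frac=1.0, random_state=seed)

    # Downsample the held-out portion to a 50/50 class split.
    n = heldout[label_col].value_counts().min()
    heldout_balanced = (
        heldout.groupby(label_col, group_keys=False)
        .apply(lambda g: g.sample(n=n, random_state=seed))
        .sample(frac=1.0, random_state=seed)
    )
    return train_balanced, heldout_balanced
```

The resulting frames could then be fed to any classifier, for example the fine-tuned BERT setup the record describes (learning rate 2e-05, max sequence length 128, up to 7 epochs).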
{ "answers": [ { "annotation_id": [ "71d3ca59dc8457559c2e3457c62b41d2c30b5ab9" ], "answer": [ { "evidence": [ "In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence." ], "extractive_spans": [ "the punchline of the joke " ], "free_form_answer": "", "highlighted_evidence": [ "In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "e9443412fb9215a3e7ff2977910ecea13b210df7" ], "answer": [ { "evidence": [ "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. We also note that the general human classification found 66.3% of the jokes to be humorous.", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.", "FLOAT SELECTED: Table 2: Results of Accuracy on Reddit Jokes dataset", "FLOAT SELECTED: Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.", "FLOAT SELECTED: Table 4: Results on Short Jokes Identification" ], "extractive_spans": [], "free_form_answer": "It had the highest accuracy comparing to all datasets 0.986% and It had the highest improvement comparing to previous methods on the same dataset by 8%", "highlighted_evidence": [ "Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. 
This was a jump of 8 percent from the most recent work done with CNNs (Table 4).", "In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. ", "The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.", "FLOAT SELECTED: Table 2: Results of Accuracy on Reddit Jokes dataset", "FLOAT SELECTED: Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.", "FLOAT SELECTED: Table 4: Results on Short Jokes Identification" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "ab3e49cf8831f98000817f90a25917717c291570" ], "answer": [ { "evidence": [ "The next question then is, what makes a joke humorous? Although humor is a universal construct, there is a wide variety between what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting." ], "extractive_spans": [ "a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread", "These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. " ], "free_form_answer": "", "highlighted_evidence": [ "Although humor is a universal construct, there is a wide variety between what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "2d2b370176c95c6b6c8db5f13bd2f9c7f860f37d" ], "answer": [ { "evidence": [ "Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.", "Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ], "extractive_spans": [ "The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ], "free_form_answer": "", "highlighted_evidence": [ "Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.\n\nSome sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "Which part of the joke is more important in humor?", "What is improvement in accuracy for short Jokes in relation other types of jokes?", "What kind of humor they have evaluated?", "How they evaluate if joke is humorous or not?" 
], "question_id": [ "89e1e0dc5d15a05f8740f471e1cb3ddd296b8942", "2815bac42db32d8f988b380fed997af31601f129", "de03e8cc1ceaf2108383114460219bf46e00423c", "8a276dfe748f07e810b3944f4f324eaf27e4a52c" ], "question_writer": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Example format of the Reddit Jokes dataset", "Table 2: Results of Accuracy on Reddit Jokes dataset", "Figure 1: Transformer Model Architecture", "Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms.", "Table 4: Results on Short Jokes Identification" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Figure1-1.png", "4-Table3-1.png", "4-Table4-1.png" ] }
[ "What is improvement in accuracy for short Jokes in relation other types of jokes?" ]
[ [ "1909.00252-Experiments ::: Results-0", "1909.00252-3-Table2-1.png", "1909.00252-Experiments ::: Results-2", "1909.00252-Experiments ::: Results-3", "1909.00252-4-Table4-1.png", "1909.00252-4-Table3-1.png" ] ]
[ "It had the highest accuracy comparing to all datasets 0.986% and It had the highest improvement comparing to previous methods on the same dataset by 8%" ]
409
1808.09920
Question Answering by Reasoning Across Documents with Graph Convolutional Networks
Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a neural model which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph while edges encode relations between different mentions (e.g., within-and crossdocument coreference). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on a multi-document question answering dataset, WIKIHOP (Welbl et al., 2018).
{ "paragraphs": [ [ "The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as a NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.", "Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.", "Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.", "Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .", "The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. 
In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.", "In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.", "Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also reported results of an ensemble which brings further 3.6% of improvement and only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:" ], [ "In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph." ], [ "The WikiHop dataset comprises of tuples $\\langle q, S_q, C_q, a^\\star \\rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\\star \\in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\\langle s, r, o \\rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $q$0 from the corpus looking at mentions of subject and object entities in the text. Note that the set $q$1 (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $q$2 where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $q$3 that is the object of a tuple in the KB with subject $q$4 and relation $q$5 among the provided set of candidate answers $q$6 .", "The goal is to learn a model that can identify the correct answer $a^\\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. 
In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation." ], [ "In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \\langle s, r, ? \\rangle $ , we identify mentions in $S_q$ of the entities in $C_q \\cup \\lbrace s\\rbrace $ and create one node per mention. This process is based on the following heuristic:", "we consider mentions spans in $S_q$ exactly matching an element of $C_q \\cup \\lbrace s\\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.", "we use predictions from a coreference resolution system to add mentions of elements in $C_q \\cup \\lbrace s\\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .", "we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.", "To each node $v_i$ , we associate a continuous annotation $\\mathbf {x}_i \\in \\mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section \"Node annotations\" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.", "Our model then approaches multi-step reasoning by transforming node representations (Section \"Node annotations\" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section \"Entity relational graph convolutional network\" we describe the propagation rule.", "Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. 
Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.", "We start with node representations $\\lbrace \\mathbf {h}_i^{(0)}\\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\\lbrace \\mathbf {h}_i^{(L)}\\rbrace _{i=1}^N$ . Together with a representation $\\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \\in C_q$ as an answer is then ", "$$ \nP(c|q, C_q, S_q) \\propto \\exp \\left(\\max _{i \\in \\mathcal {M}_c} f_o([\\mathbf {q}, \\mathbf {h}^{(L)}_i]) \\right)\\;,$$ (Eq. 16) ", "where $f_o$ is a parameterized affine transformation, and $\\mathcal {M}_c$ is the set of node indices such that $i\\in \\mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes." ], [ "Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).", "We choose not to fine tune nor propagate gradients through the ELMo architecture, as it would have defied the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo showing how our model behaves using non-contextualized word representations (we use GloVe).", "ELMo encodings are used to produce a set of representations $\\lbrace \\mathbf {x}_i\\rbrace _{i=1}^N$ , where $\\mathbf {x}_i \\in \\mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.", "ELMo encodings are used to produce a query representation $\\mathbf {q} \\in \\mathbb {R}^K$ as well. Here, $\\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\\mathbf {q}$ is used to compute a query-dependent representation of mentions $\\lbrace \\mathbf { \\hat{x}}_i\\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\\mathbf {\\hat{x}}_i = f_x(\\mathbf {q}, \\mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network." ], [ "Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representation are initialized with the query-aware encodings $\\mathbf {h}_i^{(0)} = \\mathbf {\\hat{x}}_i$ . 
Then, at each layer $0\\le \\ell \\le L$ , the update message $\\mathbf {u}_i^{(\\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\\mathbf {h}^{(\\ell )}_i$ and transformations of its neighbours: ", "$$\\mathbf {u}^{(\\ell )}_i = f_s(\\mathbf {h}^{(\\ell )}_i) + \\frac{1}{|\\mathcal {N}_i|} \\sum _{j \\in \\mathcal {N}_i} \\sum _{r \\in \\mathcal {R}_{ij}} f_r(\\mathbf {h}_j^{(\\ell )})\\;,$$ (Eq. 22) ", "where $\\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\\in \\mathcal {R}$ . Recall the available relations from Section \"Ablation study\" , namely, $\\mathcal {R} =\\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\\rbrace $ .", "A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as ", "$$\\mathbf {a}^{(\\ell )}_i = \\sigma \\left( f_a\\left([\\mathbf {u}^{(\\ell )}_i, \\mathbf {h}^{(\\ell )}_i ]\\right) \\right) \\;,$$ (Eq. 23) ", "where $\\sigma (\\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message: ", "$$\\mathbf {h}^{(\\ell + 1)}_i = \\phi (\\mathbf {u}^{(\\ell )}_i) \\odot \\mathbf {a}^{(\\ell )}_i + \\mathbf {h}^{(\\ell )}_i \\odot (1 - \\mathbf {a}^{(\\ell )}_i ) \\;,$$ (Eq. 24) ", "where $\\phi (\\cdot )$ is any nonlinear function (we used $\\tanh $ ) and $\\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability)." ], [ "In this section, we compare our method against recent work as well as preforming an ablation study using the WikiHop dataset BIBREF0 . See Appendix \"Implementation and experiments details\" in the supplementary material for a description of the hyper-parameters of our model and training details." ], [ "In this experiment, we compare our Enitity-GCN against recent prior work on the same task. We present test and development results (when present) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set.", "Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2% points. 
We additionally re-ran BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20), and smaller batch size due to the scalability issues (both memory and computation costs). We compare applying the same reductions. Eventually, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\\arg \\max \\limits _c \\prod \\limits _{i=1}^5 P_i(c|q, C_q, S_q)$ from each model." ], [ "To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.", "We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model from inductive biases that aim at multi-hop reasoning.", "The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.", "In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikipHop genuinely requires multihop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task. 
It confirms that our goal of getting away with training expensive document encoders is a realistic one.", "We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.", "Next, we ablate each type of relations independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system evaluating on the development. Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.", "We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\\mathbf {\\hat{x}}_i, \\mathbf {\\hat{x}}_j) = \\sigma \\left( \\mathbf {\\hat{x}}^\\top _i \\mathbf {W}_e \\mathbf {\\hat{x}}_j \\right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section \"Node annotations\" ). The performance drops below `No R-GCN' suggesting that it cannot learn these dependencies on its own.", "Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. 
Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.", "In Figure 3 , we show how the model performance goes when the input graph is large. In particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases." ], [ "In this section we provide an error analysis for our best single model predictions. First of all, we look at which type of questions our model performs well or poorly. There are more than 150 query types in the validation set but we filtered the three with the best and with the worst accuracy that have at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are considered harder for Entity-GCN. We then inspect samples where our model fails while assigning highest likelihood and noticed two principal sources of failure i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could be considered both correct by a human but not when measuring accuracy). See Table 6 in the supplement material for some reported samples.", "Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. Differently, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplemental material for plots.", "In Table 6 , we report three samples from WikiHop development set where out Entity-GCN fails. In particular, we show two instances where our model presents high confidence on the answer, and one where is not. We commented these samples explaining why our model might fail in these cases." ], [ "In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single document QA and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \\in S_q$ in a random order adding document separator tokens. They trained using the first answer mention in the concatenated document and evaluating exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of corefereed mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. 
Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.", "Graph neural networks have been shown successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In schlichtkrull2017modeling, GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work." ], [ "We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions to entities and edges signal relations such as within and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms published results where ablations show substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings." ], [ "We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002." ], [ "See table 5 for an outline of Entity-GCN architectural detail. Here the computational steps", "ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\\lbrace \\mathbf {x}_i\\rbrace _{i=1}^N$ .", "For the query representation $\\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.", "ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated to the $\\mathbf {q}$ , and further transformed with a two layers MLP of 1024 and 512 hidden units in 512-dimensional query aware entity representations $\\lbrace \\mathbf {\\hat{x}}_i\\rbrace _{i=1}^N \\in \\mathbb {R}^{512}$ .", "All transformations $f_*$ in R-GCN-layers are affine and they do maintain the input and output dimensionality of node representations the same (512-dimensional).", "Eventually, a 2-layers MLP with [256, 128] hidden units takes the concatenation between $\\lbrace \\mathbf {h}_i^{(L)}\\rbrace _{i=1}^N$ and $\\mathbf {q}$ to predict the probability that a candidate node $v_i$ may be the answer to the query $q$ (see Equation 16 ).", "During preliminary trials, we experimented with different numbers of R-GCN-layers (in the range 1-7). We observed that with WikiHop, for $L \\ge 3$ models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer making unnecessary to have more layers than required." 
], [ "We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\\beta _1=0.9$ , $\\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\\in {0, 0.1, 0.15, 0.2, 0.25}$ ) BIBREF30 and early-stopping on validation accuracy. We report the best results of each experiment based on accuracy on validation set." ] ], "section_name": [ "Introduction", "Method", "Dataset and task abstraction", "Reasoning on an entity graph", "Node annotations", "Entity relational graph convolutional network", "Experiments", "Comparison", "Ablation study", "Error analysis", "Related work", "Conclusion", "Acknowledgments", "Architecture", "Training details" ] }
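The gated R-GCN update given in this record (Eqs. 22-24: per-relation messages from neighbours averaged by neighbourhood size, a sigmoid gate computed from the concatenation of the update message and the current state, and a tanh-gated residual combination) can be sketched as below. This is a simplified dense-adjacency illustration under names of my own choosing; PyTorch is an assumption, and it is not the authors' implementation.

```python
# Sketch of the gated R-GCN layer of Eqs. 22-24 (dense 0/1 adjacency per
# relation; naming and PyTorch are assumptions, not the authors' code).
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.f_s = nn.Linear(dim, dim)                                    # self message
        self.f_r = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_relations)])
        self.f_a = nn.Linear(2 * dim, dim)                                # gate

    def forward(self, h, adj):
        # h:   (num_nodes, dim) query-aware node representations
        # adj: (num_relations, num_nodes, num_nodes) float 0/1 adjacency
        neigh = (adj.sum(dim=0) > 0).float()                              # neighbour mask
        n_i = neigh.sum(dim=-1, keepdim=True).clamp(min=1.0)              # |N_i|
        msgs = sum(a @ f(h) for a, f in zip(adj, self.f_r))               # sum_j sum_r f_r(h_j)
        u = self.f_s(h) + msgs / n_i                                      # Eq. 22
        gate = torch.sigmoid(self.f_a(torch.cat([u, h], dim=-1)))         # Eq. 23
        return torch.tanh(u) * gate + h * (1.0 - gate)                    # Eq. 24
```

Stacking L such layers and then taking, per candidate, the maximum of f_o([q, h_i^(L)]) over its mention nodes (Eq. 16 in the record) would complete the model; that readout is omitted from this sketch.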
{ "answers": [ { "annotation_id": [ "e24ec730b51654dd114621a19ddd11dfa3f0ae2a" ], "answer": [ { "evidence": [ "In this experiment, we compare our Enitity-GCN against recent prior work on the same task. We present test and development results (when present) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set." ], "extractive_spans": [], "free_form_answer": "Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN", "highlighted_evidence": [ "In this experiment, we compare our Enitity-GCN against recent prior work on the same task. We present test and development results (when present) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "42ad1841de98c8e37157fac3129e3c0079f78f99" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "4ca6ee7a6d4de633d423c68b6762fe5868451d48" ], "answer": [ { "evidence": [ "To each node $v_i$ , we associate a continuous annotation $\\mathbf {x}_i \\in \\mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section \"Node annotations\" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. 
In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges)." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "65cbad27686d3e040adfd43ad860fe2f8cff4ba5" ], "answer": [ { "evidence": [ "To each node $v_i$ , we associate a continuous annotation $\\mathbf {x}_i \\in \\mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section \"Node annotations\" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph." ], "extractive_spans": [], "free_form_answer": "Assign a value to the relation based on whether mentions occur in the same document, if mentions are identical, or if mentions are in the same coreference chain.", "highlighted_evidence": [ "We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "2d8dac8e7ff4bed872bb06584732abc283e941e6" ], "answer": [ { "evidence": [ "In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \\langle s, r, ? \\rangle $ , we identify mentions in $S_q$ of the entities in $C_q \\cup \\lbrace s\\rbrace $ and create one node per mention. 
This process is based on the following heuristic:", "we consider mentions spans in $S_q$ exactly matching an element of $C_q \\cup \\lbrace s\\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.", "we use predictions from a coreference resolution system to add mentions of elements in $C_q \\cup \\lbrace s\\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .", "we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity." ], "extractive_spans": [], "free_form_answer": "Exact matches to the entity string and predictions from a coreference resolution system", "highlighted_evidence": [ " For a given query $q = \\langle s, r, ? \\rangle $ , we identify mentions in $S_q$ of the entities in $C_q \\cup \\lbrace s\\rbrace $ and create one node per mention. This process is based on the following heuristic:\n\nwe consider mentions spans in $S_q$ exactly matching an element of $C_q \\cup \\lbrace s\\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.\n\nwe use predictions from a coreference resolution system to add mentions of elements in $C_q \\cup \\lbrace s\\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .\n\nwe discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "be2e86638cfc01969b3db5645f4c12359110f2e2" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo – without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ], "extractive_spans": [ "Accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo – without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "b19f61f9181c099a885f78baa7ee5fa1ce9621dc" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo – without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." 
], "extractive_spans": [], "free_form_answer": "During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo – without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "question": [ "What baseline did they compare Entity-GCN to?", "How many documents at a time can Entity-GCN handle?", "Did they use a relation extraction method to construct the edges in the graph?", "How did they get relations between mentions?", "How did they detect entity mentions?", "What is the metric used with WIKIHOP?", "What performance does the Entity-GCN get on WIKIHOP?" ], "question_id": [ "c4a6b727769328333bb48d59d3fc4036a084875d", "bbeb74731b9ac7f61e2d74a7d9ea74caa85e62ef", "93e8ce62361b9f687d5200d2e26015723721a90f", "d05d667822cb49cefd03c24a97721f1fe9dc0f4c", "2a1e6a69e06da2328fc73016ee057378821e0754", "63403ffc0232ff041f3da8fa6c30827cfd6404b7", "a25c1883f0a99d2b6471fed48c5121baccbbae82" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "", "", "", "", "" ], "topic_background": [ "research", "research", "research", "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: A sample from WIKIHOP where multi-step reasoning and information combination from different documents is necessary to infer the correct answer.", "Figure 2: Supporting documents (dashed ellipses) organized as a graph where nodes are mentions of either candidate entities or query entities. Nodes with the same color indicates they refer to the same entity (exact match, coreference or both). Nodes are connected by three simple relations: one indicating co-occurrence in the same document (solid edges), another connecting mentions that exactly match (dashed edges), and a third one indicating a coreference (bold-red line).", "Table 1: WIKIHOP dataset statistics from Welbl et al. (2018): number of candidates and documents per sample and document length.", "Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set. Our Entity-GCN outperforms recent prior work without learning any language model to process the input but relying on a pretrained one (ELMo – without fine-tunning it) and applying R-GCN to reason among entities in the text. * with coreference for unmasked dataset and without coreference for the masked one.", "Table 3: Ablation study on WIKIHOP validation set. The full model is our Entity-GCN with all of its components and other rows indicate models trained without a component of interest. We also report baselines using GloVe instead of ELMo with and without R-GCN. For the full model we report mean±1 std over 5 runs.", "Table 4: Accuracy and precision at K (P@K in the table) analysis overall and per query type. Avg. |Cq| indicates the average number of candidates with one standard deviation.", "Table 5: Model architecture.", "Table 6: Samples from WIKIHOP set where Entity-GCN fails. p indicates the predicted likelihood.", "Figure 3: Accuracy (blue) of our best single model with respect to the candidate set size (on the top) and nodes set size (on the bottom) on the validation set. Re-scaled data distributions (orange) per number of candidate (top) and nodes (bottom). Dashed lines indicate average accuracy." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "11-Table5-1.png", "12-Table6-1.png", "13-Figure3-1.png" ] }
[ "What baseline did they compare Entity-GCN to?", "How did they get relations between mentions?", "How did they detect entity mentions?", "What performance does the Entity-GCN get on WIKIHOP?" ]
[ [ "1808.09920-Comparison-0" ], [ "1808.09920-Reasoning on an entity graph-4" ], [ "1808.09920-Reasoning on an entity graph-3", "1808.09920-Reasoning on an entity graph-2", "1808.09920-Reasoning on an entity graph-0", "1808.09920-Reasoning on an entity graph-1" ], [ "1808.09920-6-Table2-1.png" ] ]
[ "Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN", "Assign a value to the relation based on whether mentions occur in the same document, if mentions are identical, or if mentions are in the same coreference chain.", "Exact matches to the entity string and predictions from a coreference resolution system", "During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models" ]
411
1809.01060
The Effect of Context on Metaphor Paraphrase Aptness Judgments
We conduct two experiments to study the effect of context on metaphor paraphrase aptness judgments. The first is an AMT crowd source task in which speakers rank metaphor paraphrase candidate sentence pairs in short document contexts for paraphrase aptness. In the second we train a composite DNN to predict these human judgments, first in binary classifier mode, and then as gradient ratings. We found that for both mean human judgments and our DNN's predictions, adding document context compresses the aptness scores towards the center of the scale, raising low out of context ratings and decreasing high out of context scores. We offer a provisional explanation for this compression effect.
{ "paragraphs": [ [ "A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic\".", "It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry.", "Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case.", "This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. 
Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates.", "In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area." ], [ " BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking. This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor, and the second is a literal paraphrase candidate.", "This corpus was constructed with a view to representing a large variety of syntactic structures and semantic phenomena in metaphorical sentences. Many of these structures and phenomena do not occur as metaphorical expressions, with any frequency, in natural text and were therefore introduced through hand crafted examples.", "Each pair of sentences in the corpus has been rated by AMT annotators for paraphrase aptness on a scale of 1-4, with 4 being the highest degree of aptness. In BIBREF3 's dataset, sentences come in groups of five, where the first element is the “reference element\" with a metaphorical expression, and the remaining four sentences are “candidates\" that stand in a degree of paraphrasehood to the reference. Here is an example of a metaphor-paraphrase candidate pair.", "The average AMT paraphrase score for this pair is 4.0, indicating a high degree of aptness.", "We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example.", "One of the authors constructed most of these contexts by hand. In some cases, it was possible to locate the original metaphor in an existing document. This was the case for", "For these cases, a variant of the existing context was added to both the metaphorical and the literal sentences. We introduced small modifications to keep the context short and clear, and to avoid copyright issues. We lightly modified the contexts of metaphors extracted from corpora when the original context was too long, ie. when the contextual sentences of the selected metaphor were longer than the maximum length we specified for our corpus. In such cases we reduced the length of the sentence, while sustaining its meaning.", "The context was designed to sound as natural as possible. Since the same context is used for metaphors and their literal candidate paraphrases, we tried to design short contexts that make sense for both the figurative and the literal sentences, even when the pair had been judged as non-paraphrases. 
We kept the context as neutral as possible in order to avoid a distortion in crowd source ratings.", "For example, in the following pair of sentences, the literal sentence is not a good paraphrase of the figurative one (a simile).", "We opted for a context that is natural for both sentences.", "We sought to avoid, whenever possible, an incongruous context for one of the sentences that could influence our annotators' ratings.", "We collected a sub-corpus of 200 contextually embedded pairs of sentences. We tried to keep our data as balanced as possible, drawing from all four rating classes of paraphrase aptness ratings (between 1 to 4) that BIBREF3 obtained. We selected 44 pairs of 1 ratings, 51 pairs of 2, 43 pairs of 3 and 62 pairs of 4.", "We then used AMT crowd sourcing to rate the contextualized paraphrase pairs, so that we could observe the effect of document context on assessments of metaphor paraphrase aptness.", "To test the reproducibility of BIBREF3 's ratings, we launched a pilot study for 10 original non-contextually embedded pairs, selected from all four classes of aptness. We observed that the annotators provided mean ratings very similar to those reported in BIBREF3 . The Pearson coefficent correlation between the mean judgments of our out-of-context pilot annotations and BIBREF3 's annotations for the same pair was over 0.9. We then conducted an AMT annotation task for the 200 contextualised pairs. On average, 20 different annotators rated each pair. We considered as “rogue\" those annotators who rated the large majority of pairs with very high or very low scores, and those who responded inconsistently to two “trap\" pairs. After filtering out the rogues, we had an average of 14 annotators per pair." ], [ "We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowd source experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.", "Our main result concerns the effect of context on mean paraphrase judgment. We observed that it tends to flatten aptness ratings towards the center of the rating scale. 71.1% of the metaphors that had been considered highly apt (average rounded score of 4) in the context-less pairs received a more moderate judgment (average rounded score of 3), but the reverse movement was rare. Only 5% of pairs rated 3 out of context (2 pairs) were boosted to a mean rating of 4 in context. At the other end of the scale, 68.2% of the metaphors judged at 1 category of aptness out of context were raised to a mean of 2 in context, while only the 3.9% of pairs rated 2 out of context were lowered to 1 in context.", "Ratings at the middle of the scale - 2 (defined as semantically related non-paraphrases) and 3 (imperfect or loose paraphrases) - remained largely stable, with little movement in either direction. 9.8% of pairs rated 2 were re-ranked as 3 when presented in context, and 10% of pairs ranked at 3 changed to 2. The division between 2 and 3 separates paraphrases from non-paraphrases. Our results suggest that this binary rating of paraphrase aptness was not strongly affected by context. 
Context operates at the extremes of our scale, raising low aptness ratings and lowering high aptness ratings. This effect is clearly indicated in the regression chart in Fig FIGREF15 .", "This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.", "This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context.", "If this is the case, an analogous process may generate the same compression effect for metaphor aptness assessment of sentence pairs in context. Speakers may attempt to achieve broader discourse coherence when assessing the metaphor-paraphrase aptness relation in a document context. Out of context they focus more narrowly on the semantic relations between a metaphorical sentence and its paraphrase candidate. Therefore, this relation is at the centre of a speaker's concern, and it receives more fine-grained assessment when considered out of context than in context. This issue clearly requires further research." ], [ "We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:", "The encoder for each pair of sentences taken as input is composed of two parallel \"Atrous\" Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers.", "The encoder is preloaded with the lexical embeddings from Word2vec BIBREF6 . The sequences of word embeddings that we use as input provides the model with dense word-level information, while the model tries to generalize over these embedding patterns.", "The combination of a CNN and an LSTM allows us to capture both long-distance syntactic and semantic relations, best identified by a CNN, and the sequential nature of the input, most efficiently identified by an LSTM. Several existing studies, cited in BIBREF4 , demonstrate the advantages of combining CNNs and LSTMs to process texts.", "The model produces a single classifier value between 0 and 1. We transform this score into a binary output of 0 or 1 by applying a threshold of 0.5 for assigning 1.", "The architecture of the model is given in Fig FIGREF19 .", "We use the same general protocol as BIBREF3 for training with supervised learning, and testing the model.", "Using BIBREF3 's out-of- context metaphor dataset and our contextualized extension of this set, we apply four variants of the training and testing protocol.", "When we train or test the model on the out-of-context dataset, we use BIBREF3 's original annotated corpus of 800 metaphor-paraphrase pairs. The in-context dataset contains 200 annotated pairs." 
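For concreteness, the composite encoder-classifier described above can be sketched roughly as follows in PyTorch: each sentence passes through a dilated ("atrous") convolution branch and an LSTM branch feeding fully connected layers, the two 10-dimensional sentence vectors are concatenated, and the resulting score is thresholded at 0.5. The layer sizes, the shared encoder, and the tensor shapes here are assumptions for illustration, not the configuration of BIBREF3.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    # One sentence branch: dilated 1-D convolution + LSTM over word embeddings.
    def __init__(self, emb_dim=300, hidden=64, out_dim=10):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, dilation=2, padding=2)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(2 * hidden, out_dim)

    def forward(self, emb):                       # emb: (batch, seq_len, emb_dim)
        c = torch.relu(self.conv(emb.transpose(1, 2))).max(dim=2).values
        _, (h, _) = self.lstm(emb)
        return torch.relu(self.fc(torch.cat([c, h[-1]], dim=1)))

class AptnessClassifier(nn.Module):
    # Encode metaphor and paraphrase candidate, then score the pair in [0, 1].
    def __init__(self):
        super().__init__()
        self.enc = SentenceEncoder()
        self.score = nn.Linear(20, 1)

    def forward(self, metaphor_emb, candidate_emb):
        pair = torch.cat([self.enc(metaphor_emb), self.enc(candidate_emb)], dim=1)
        return torch.sigmoid(self.score(pair)).squeeze(1)

model = AptnessClassifier()
p = model(torch.randn(2, 12, 300), torch.randn(2, 12, 300))
binary = (p > 0.5).long()   # the 0.5 decision threshold mentioned in the text
```

The gradient ratings discussed below would be read off the continuous score rather than the thresholded output.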
], [ "We use the model both to predict binary classification of a metaphor paraphrase candidate, and to generate gradient aptness ratings on the 4 category scale (see BIBREF3 for details). A positive binary classification is accurate if it is INLINEFORM0 a 2.5 mean human rating. The gradient predictions are derived from the softmax distribution of the output layer of the model. The results of our modelling experiments are given in Table TABREF24 .", "The main result that we obtain from these experiments is that the model learns binary classification to a reasonable extent on the in-context dataset, both when trained on the same kind of data (in-context pairs), and when trained on BIBREF3 's original dataset (out-of-context pairs). However, the model does not perform well in predicting gradient in-context judgments when trained on in-context pairs. It improves slightly for this task when trained on out-of-context pairs.", "By contrast, it does well in predicting both binary and gradient ratings when trained and tested on out-of-context data sets.", " BIBREF5 also note a decline in Pearson correlation for their DNN models on the task of predicting human in-context acceptability judgments, but it is less drastic. They attribute this decline to the fact that the compression effect renders the gradient judgments less separable, and so harder to predict. A similar, but more pronounced version of this effect may account for the difficulty that our model encounters in predicting gradient in-context ratings. The binary classifier achieves greater success for these cases because its training tends to polarise the data in one direction or the other.", "We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the the larger amount of training data provided by the out-of-context pairs.", "We can use this variant (out-of-context training and in-context testing) to perform a fine-grained comparison of the model's predicted ratings for the same sentences in and out of context. When we do this, we observe that out of 200 sentence pairs, our model scores the majority (130 pairs) higher when processed in context than out of context. A smaller but significant group (70 pairs) receives a lower score when processed in context. The first group's average score before adding context (0.48) is consistently lower than that of the second group (0.68). Also, as Table TABREF26 indicates, the pairs that our model rated, out of context, with a score lower than 0.5 (on the model's softmax distribution), received on average a higher rating in context, while the opposite is true for the pairs rated with a score higher than 0.5. In general, sentence pairs that were rated highly out of context receive a lower score in context, and vice versa. When we did linear regression on the DNNs in and out of context predicted scores, we observed substantially the same compression pattern exhibited by our AMT mean human judgments. Figure FIGREF27 plots this regression graph." ], [ " BIBREF7 present ratings of aptness and comprehensibility for 64 metaphors from two groups of subjects. 
They note that metaphors were perceived as more apt and more comprehensible to the extent that their terms occupied similar positions within dissimilar domains. Interestingly, BIBREF8 also present experimental results to claim that imagery does not clearly correlate with metaphor aptness. Aptness judgments are also subjected to individual differences.", " BIBREF9 points to such individual differences in metaphor processing. She asked 27 participants to rate 37 metaphors for difficulty, aptness and familiarity, and to write one or more interpretations of the metaphor. Subjects with higher working memory span were able to give more detailed and elaborate interpretations of metaphors. Familiarity and aptness correlated with both high and low span subjects. For high span subjects aptness of metaphor positively correlated with number of interpretations, while for low span subjects the opposite was true.", " BIBREF10 analyses the aptness of metaphors with and without extended context. She finds that domain similarity correlates with aptness judgments in isolated metaphors, but not in contextualized metaphors. She also reports that there is no clear correlation between metaphor aptness ratings in isolated and in contextualized examples. BIBREF0 study the relation between aptness and comprehensibility in metaphors and similes. They provide experimental results indicating that aptness is a better predictor than comprehensibility for the “transformation\" of a simile into a metaphor. Subjects tended to remember similes as metaphors (i.e. remember the dancer's arms moved like startled rattlesnakes as the dancer's arms were startled rattlesnakes) if they were judged to be particularly apt, rather than particularly comprehensible. They claim that context might play an important role in this process. They suggest that context should ease the transparency and increase the aptness of both metaphors and similes.", " BIBREF11 present a series of experiments indicating that metaphors tend to be interpreted through emergent features that were not rated as particularly relevant, either for the tenor or for the vehicle of the metaphor. The number of emergent features that subjects were able to draw from a metaphor seems to correlate with their aptness judgments.", " BIBREF12 use Event-Related Brain Potentials (ERPs) to study the temporal dynamics of metaphor processing in reading literary texts. They emphasize the influence of context on the ability of a reader to smoothly interpret an unusual metaphor.", " BIBREF13 use electrophysiological experiments to try to disentangle the effect of a metaphor from that of its context. They find that de-contextualized metaphors elicited two different brain responses, INLINEFORM0 and INLINEFORM1 , while contextualized metaphors only produced the INLINEFORM2 effect. They attribute the INLINEFORM3 effect, often observed in neurological studies of metaphors, to expectations about upcoming words in the absence of a predictive context that “prepares\" the reader for the metaphor. They suggest that the INLINEFORM4 effect reflects the actual interpretative processing of the metaphor.", "This view is supported by several neurological studies showing that the INLINEFORM0 effect arises with unexpected elements, like new presuppositions introduced into a text in a way not implied by the context BIBREF14 , or unexpected associations with a noun-verb combination, not indicated by previous context (for example preceded by neutral context, as in BIBREF15 )." 
], [ "We have observed that embedding metaphorical sentences and their paraphrase candidates in a document context generates a compression effect in human metaphor aptness ratings. Context seems to mitigate the perceived aptness of metaphors in two ways. Those metaphor-paraphrase pairs given very low scores out of context receive increased scores in context, while those with very high scores out of context decline in rating when presented in context. At the same time, the demarcation line between paraphrase and non-paraphrase is not particularly affected by the introduction of extended context.", "As previously observed by BIBREF10 , we found that context has an influence on human aptness ratings for metaphors, although, unlike her results, we did find a correlation between the two sets of ratings. BIBREF0 's expectation that context should facilitate a metaphor's aptness was supported only in one sense. Aptness increases for low-rated pairs. But it decreases for high-rated pairs.", "We applied BIBREF3 's DNN for the MAPT to an in-context test set, experimenting with both out-of-context and in-context training corpora. We obtained reasonable results for binary classification of paraphrase candidates for aptness, but the performance of the model declined sharply for the prediction of human gradient aptness judgments, relative to its performance on a corresponding out-of-context test set. This appears to be the result of the increased difficulty in separating rating categories introduced by the compression effect.", "Strikingly, the linear regression analyses of human aptness judgments for in- and out-of-context paraphrase pairs, and of our DNN's predictions for these pairs reveal similar compression patterns. These patterns produce ratings that cannot be clearly separated along a linear ranking scale.", "To the best of our knowledge ours is the first study of the effect of context on metaphor aptness on a corpus of this dimension, using crowd sourced human judgments as the gold standard for assessing the predictions of a computational model of paraphrase. We also present the first comparative study of both human and model judgments of metaphor paraphrase for in-context and out-of-context variants of metaphorical sentences.", "Finally, the compression effect that context induces on paraphrase judgments corresponds closely to the one observed independently in another task, which is reported in BIBREF5 . We regard this effect as a significant discovery that increases the plausibility and the interest of our results. The fact that it appears clearly with two tasks involving different sorts of DNNs and distinct learning regimes (unsupervised learning with neural network language models for the acceptability prediction task discussed, as opposed to supervised learning with our composite DNN for paraphrase prediction) reduces the likelihood that this effect is an artefact of our experimental design.", "While our dataset is still small, we are presenting an initial investigation of a phenomenon which is, to date, little studied. We are working to enlarge our dataset and in future work we will expand both our in- and out-of-context annotated metaphor-paraphrase corpora.", "While the corpus we used contains a number of hand crafted examples, it would be preferable to find these example types in natural corpora, and we are currently working on this. We will be extracting a dataset of completely natural (corpus-driven) examples. 
We are seeking to expand the size of the data set to improve the reliability of our modelling experiments.", "We will also experiment with alternative DNN architectures for the MAPT. We will conduct qualitative analyses on the kinds of metaphors and similes that are more prone to a context-induced rating switch.", "One of our main concerns in future research will be to achieve a better understanding of the compression effect of context on human judgments and DNN models." ] ], "section_name": [ "Introduction", "Annotating Metaphor-Paraphrase Pairs in Contexts", "Annotation Results", "Modelling Paraphrase Judgments in Context", "MPAT Modelling Results", "Related Cognitive Work on Metaphor Aptness", "Conclusions and Future Work" ] }
{ "answers": [ { "annotation_id": [ "6726336c9aa75736f937d405bdddafff8050dea8" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "2e00f30c8722c239663509282d661ff1be2cb2d9" ], "answer": [ { "evidence": [ "This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.", "This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context." ], "extractive_spans": [ "adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence" ], "free_form_answer": "", "highlighted_evidence": [ "This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts.", "BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "8a606457de03b22295ecc1242c2ceb0698b53cda" ], "answer": [ { "evidence": [ "We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example." ], "extractive_spans": [], "free_form_answer": "Preceding and following sentence of each metaphor and paraphrase are added as document context", "highlighted_evidence": [ "We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "cde782229cf985a9dc78bf950876c58e5e97ddaf" ], "answer": [ { "evidence": [ "We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). 
This result may partly be an artifact of the the larger amount of training data provided by the out-of-context pairs." ], "extractive_spans": [], "free_form_answer": "Best performance achieved is 0.72 F1 score", "highlighted_evidence": [ "We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "", "", "", "" ], "question": [ "Do they report results only on English data?", "What provisional explanation do the authors give for the impact of document context?", "What document context was added?", "What were the results of the first experiment?" ], "question_id": [ "46563a1fb2c3e1b39a185e4cbb3ee1c80c8012b7", "6b7d76c1e1a2490beb69609ba5652476dde8831b", "37753fbffc06ce7de6ada80c89f1bf5f190bbd88", "7ee29d657ccb8eb9d5ec64d4afc3ca8b5f3bcc9f" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1: In-context and out-of-context mean ratings. Points above the broken diagonal line represent sentence pairs which received a higher rating when presented in context. The total least-square linear regression is shown as the second line.", "Figure 2: DNN encoder for predicting metaphorical paraphrase aptness from Bizzoni and Lappin (2018). Each encoder represents a sentence as a 10-dimensional vector. These vectors are concatenated to compute a single score for the pair of input sentences.", "Table 1: F-score binary classification accuracy and Pearson correlation for three different regimens of supervised learning. The * indicates results for a set of 10-fold cross-validation runs. This was necessary in the first case, when training and testing are both on our small corpus of in-context pairs. In the second and third rows, since we are using the full out-of-context and in-context dataset, we report single-run results. The fourth row is Bizzoni and Lappin (2018)’s best run result. (Our single-run best result for the first row is an F-score of 0.8 and a Pearson correlation 0.16).", "Table 2: We show the number of pairs that received a low score out of context (first row) and the number of pairs that received a high score out of context (second row). We report the mean score and standard deviation (Std) of the two groups when judged out of context (OOC) and when judged in context (IC) by our model. The model’s scores range between 0 and 1. As can be seen, the mean of the low-scoring group rises in context, and the mean of the high-scoring group decreases in context.", "Figure 3: In-context and out-of-context ratings assigned by our trained model. Points above the broken diagonal line represent sentence pairs which received a higher rating when presented in context. The total least-square linear regression is shown as the second line." ], "file": [ "5-Figure1-1.png", "6-Figure2-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Figure3-1.png" ] }
[ "What document context was added?", "What were the results of the first experiment?" ]
[ [ "1809.01060-Annotating Metaphor-Paraphrase Pairs in Contexts-4" ], [ "1809.01060-MPAT Modelling Results-4" ] ]
[ "Preceding and following sentence of each metaphor and paraphrase are added as document context", "Best performance achieved is 0.72 F1 score" ]
414
1705.03261
Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers
Drug-drug interaction (DDI) is vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases have been constructed to avoid mistakenly combined use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, existing work relies on either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on the 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most drug pairs into the correct DDI categories, outperforming existing NLP and deep learning methods.

{ "paragraphs": [ [ "Drug-drug interaction (DDI) is a situation when one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effect, if two or more medicines were taken and their DDI were not investigated in detail. DDI is a common cause of illness, even a cause of death BIBREF1 . Thus, DDI databases for clinical medication decisions are proposed by some researchers. These databases such as SFINX BIBREF2 , KEGG BIBREF3 , CredibleMeds BIBREF4 help physicians and pharmacists avoid most adverse drug reactions.", "Traditional DDI databases are manually constructed according to clinical records, scientific research and drug specifications. For instance, The sentence “With combined use, clinicians should be aware, when phenytoin is added, of the potential for reexacerbation of pulmonary symptomatology due to lowered serum theophylline concentrations BIBREF5 ”, which is from a pharmacotherapy report, describe the side effect of phenytoin and theophylline's combined use. Then this information on specific medicines will be added to DDI databases. As drug-drug interactions have being increasingly found, manually constructing DDI database would consume a lot of manpower and resources.", "There has been many efforts to automatically extract DDIs from natural language BIBREF0 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , mainly medical literature and clinical records. These works can be divided into the following categories:", "To avoid complex feature engineering and NLP toolkits' usage, we employ deep learning approaches for sentence comprehension as a whole. Our model takes in a sentence from biomedical literature which contains a drug pair and outputs what kind of DDI this drug pair belongs. This assists physicians refrain from improper combined use of drugs. In addition, the word and sentence level attentions are introduced to our model for better DDI predictions.", "We train our language comprehension model with labeled instances. Figure FIGREF5 shows partial records in DDI corpus BIBREF16 . We extract the sentence and drug pairs in the records. There are 3 drug pairs in this example thus we have 3 instances. The DDI corpus annotate each drug pair in the sentence with a DDI type. The DDI type, which is the most concerned information, is described in table TABREF4 . The details about how we train our model and extract the DDI type from text are described in the remaining sections." ], [ "In DDI extraction task, NLP methods or machine learning approaches are proposed by most of the work. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomenons and two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies kernel method to the existing model and outperforms it.", "Neural network based approaches have been proposed by several works. Liu et al. BIBREF9 employ CNN for DDI extraction for the first time which outperforms the traditional machine learning based methods. Limited by the convolutional kernel size, the CNN can only extracted features of continuous 3 to 5 words rather than distant words. Liu et al. BIBREF8 proposed dependency-based CNN to handle distant but relevant words. Sahu et al. BIBREF12 proposed LSTM based DDI extraction approach and outperforms CNN based approach, since LSTM handles sentence as a sequence instead of slide windows. 
To conclude, Neural network based approaches have advantages of 1) less reliance on extra NLP toolkits, 2) simpler preprocessing procedure, 3) better performance than text analysis and machine learning methods.", "Drug-drug interaction extraction is a relation extraction task of natural language processing. Relation extraction aims to determine the relation between two given entities in a sentence. In recent years, attention mechanism and various neural networks are applied to relation extraction BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Convolutional deep neural network are utilized for extracting sentence level features in BIBREF19 . Then the sentence level features are concatenated with lexical level features, which are obtained by NLP toolkit WordNet BIBREF22 , followed by a multilayer perceptron (MLP) to classify the entities' relation. A fixed work is proposed by Nguyen et al. BIBREF21 . The convolutional kernel is set various size to capture more n-gram features. In addition, the word and position embedding are trained automatically instead of keeping constant as in BIBREF19 . Wang et al. BIBREF20 introduce multi-level attention mechanism to CNN in order to emphasize the keywords and ignore the non-critical words during relation detection. The attention CNN model outperforms previous state-of-the-art methods.", "Besides CNN, Recurrent neural network (RNN) has been applied to relation extraction as well. Zhang et al. BIBREF18 utilize long short-term memory network (LSTM), a typical RNN model, to represent sentence. The bidirectional LSTM chronologically captures the previous and future information, after which a pooling layer and MLP have been set to extract feature and classify the relation. Attention mechanism is added to bidirectional LSTM in BIBREF17 for relation extraction. An attention layer gives each memory cell a weight so that classifier can catch the principal feature for the relation detection. The Attention based bidirectional LSTM has been proven better than previous work." ], [ "In this section, we present our bidirectional recurrent neural network with multiple attention layer model. The overview of our architecture is shown in figure FIGREF15 . For a given instance, which describes the details about two or more drugs, the model represents each word as a vector in embedding layer. Then the bidirectional RNN layer generates a sentence matrix, each column vector in which is the semantic representation of the corresponding word. The word level attention layer transforms the sentence matrix to vector representation. Then sentence level attention layer generates final representation for the instance by combining several relevant sentences in view of the fact that these sentences have the same drug pair. Followed by a softmax classifier, the model classifies the drug pair in the given instance as specific DDI." ], [ "The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the interested two drugs with “drug1” and “drug2” while the other drugs are replaced by “durg0”, as in BIBREF9 did. This step is called drug blinding. 
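A minimal sketch of this blinding step is given below; it is illustrative only (plain string replacement on a toy sentence with hypothetical drug names), whereas the real corpus supplies character offsets for each annotated drug mention. The actual corpus example follows after the sketch.

```python
def drug_blind(sentence, drug_pair, all_drugs):
    """Rename the two drugs of interest to drug1/drug2 and every other
    annotated drug to drug0. String replacement for illustration; the DDI
    corpus provides character offsets for each drug mention."""
    blinded = sentence
    for name in sorted(all_drugs, key=len, reverse=True):
        if name == drug_pair[0]:
            blinded = blinded.replace(name, "drug1")
        elif name == drug_pair[1]:
            blinded = blinded.replace(name, "drug2")
        else:
            blinded = blinded.replace(name, "drug0")
    return blinded

sent = "DrugA may raise the serum level of DrugB when combined with DrugC"
print(drug_blind(sent, ("DrugA", "DrugB"), ["DrugA", "DrugB", "DrugC"]))
# -> "drug1 may raise the serum level of drug2 when combined with drug0"
```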
For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug blinded sentences are the instances that are fed to our model.", "We put the sentences with the same drug pairs together as a set, since the sentence level attention layer (will be described in Section SECREF21 ) will use the sentences which contain the same drugs." ], [ "Given an instance INLINEFORM0 which contains specified two drugs INLINEFORM1 , INLINEFORM2 , each word is embedded in a INLINEFORM3 dimensional space ( INLINEFORM4 , INLINEFORM5 are the dimension of word embedding and position embedding). The look up table function INLINEFORM6 maps a word or a relative position to a column vector. After embedding layer the sentence is represented by INLINEFORM7 , where DISPLAYFORM0 ", "The INLINEFORM0 function is usually implemented with matrix-vector product. Let INLINEFORM1 , INLINEFORM2 denote the one-hot representation (column vector) of word and relative distance. INLINEFORM3 , INLINEFORM4 are word and position embedding query matrix. The look up functions are implemented by DISPLAYFORM0 ", "Then the word sequence INLINEFORM0 is fed to the RNN layer. Note that the sentence will be filled with INLINEFORM1 if its length is less than INLINEFORM2 ." ], [ "The words in the sequence are read by RNN's gated recurrent unit (GRU) one by one. The GRU takes the current word INLINEFORM0 and the previous GRU's hidden state INLINEFORM1 as input. The current GRU encodes INLINEFORM2 and INLINEFORM3 into a new hidden state INLINEFORM4 (its dimension is INLINEFORM5 , a hyperparameter), which can be regarded as informations the GRU remembered.", "Figure FIGREF25 shows the details in GRU. The reset gate INLINEFORM0 selectively forgets informations delivered by previous GRU. Then the hidden state becomes INLINEFORM1 . The update gate INLINEFORM2 updates the informations according to INLINEFORM3 and INLINEFORM4 . The equations below describe these procedures. Note that INLINEFORM5 stands for element wise multiplication. DISPLAYFORM0 DISPLAYFORM1 ", "The bidirectional RNN contains forward RNN and backward RNN. Forward RNN reads sentence from INLINEFORM0 to INLINEFORM1 , generating INLINEFORM2 , INLINEFORM3 , ..., INLINEFORM4 . Backward RNN reads sentence from INLINEFORM5 to INLINEFORM6 , generating INLINEFORM7 , INLINEFORM8 , ..., INLINEFORM9 . Then the encode result of this layer is DISPLAYFORM0 ", "We apply dropout technique in RNN layer to avoid overfitting. Each GRU have a probability (denoted by INLINEFORM0 , also a hyperparameter) of being dropped. The dropped GRU has no output and will not affect the subsequent GRUs. With bidirectional RNN and dropout technique, the input INLINEFORM1 is encoded into sentence matrix INLINEFORM2 ." ], [ "The purpose of word level attention layer is to extract sentence representation (also known as feature vector) from encoded matrix. We use word level attention instead of max pooling, since attention mechanism can determine the importance of individual encoded word in each row of INLINEFORM0 . Let INLINEFORM1 denotes the attention vector (column vector), INLINEFORM2 denotes the filter that gives each element in the row of INLINEFORM3 a weight. 
The following equations shows the attention operation, which is also illustrated in figure FIGREF15 . DISPLAYFORM0 DISPLAYFORM1 ", "The softmax function takes a vector INLINEFORM0 as input and outputs a vector, DISPLAYFORM0 ", " INLINEFORM0 denotes the feature vector captured by this layer. Several approaches BIBREF12 , BIBREF17 use this vector and softmax classifier for classification. Inspired by BIBREF23 we propose the sentence level attention to combine the information of other sentences for a improved DDI classification." ], [ "The previous layers captures the features only from the given sentence. However, other sentences may contains informations that contribute to the understanding of this sentence. It is reasonable to look over other relevant instances when determine two drugs' interaction from the given sentence. In our implementation, the instances that have the same drug pair are believed to be relevant. The relevant instances set is denoted by INLINEFORM0 , where INLINEFORM1 is the sentence feature vector. INLINEFORM2 stands for how well the instance INLINEFORM3 matches its DDI INLINEFORM4 (Vector representation of a specific DDI). INLINEFORM5 is a diagonal attention matrix, multiplied by which the feature vector INLINEFORM6 can concentrate on those most representative features. DISPLAYFORM0 DISPLAYFORM1 ", " INLINEFORM0 is the softmax result of INLINEFORM1 . The final sentence representation is decided by all of the relevant sentences' feature vector, as Equation EQREF24 shows. DISPLAYFORM0 ", "Note that the set INLINEFORM0 is gradually growing as new sentence with the same drugs pairs is found when training. An instance INLINEFORM1 is represented by INLINEFORM2 before sentence level attention. The sentence level attention layer finds the set INLINEFORM3 , instances in which have the same drug pair as in INLINEFORM4 , and put INLINEFORM5 in INLINEFORM6 . Then the final sentence representation INLINEFORM7 is calculated in this layer." ], [ "A given sentence INLINEFORM0 is finally represented by the feature vector INLINEFORM1 . Then we feed it to a softmax classifier. Let INLINEFORM2 denotes the set of all kinds of DDI. The output INLINEFORM3 is the probabilities of each class INLINEFORM4 belongs. DISPLAYFORM0 ", "We use cross entropy cost function and INLINEFORM0 regularization as the optimization objective. For INLINEFORM1 -th instance, INLINEFORM2 denotes the one-hot representation of it's label, where the model outputs INLINEFORM3 . The cross entropy cost is: DISPLAYFORM0 ", "For a mini-batch INLINEFORM0 , the optimization objective is: DISPLAYFORM0 ", "All parameters in this model is: DISPLAYFORM0 ", "We optimize the parameters of objective function INLINEFORM0 with Adam BIBREF24 , which is a variant of mini-batch stochastic gradient descent. During each train step, the gradient of INLINEFORM1 is calculated. Then INLINEFORM2 is adjusted according to the gradient. After the end of training, we have a model that is able to predict two drugs' interactions when a sentence about these drugs is given." ], [ "The model is trained for DDI classification. The parameters in list INLINEFORM0 are tuned during the training process. Given a new sentence with two drugs, we can use this model to classify the DDI type.", "The DDI prediction follows the procedure described in Section SECREF6 - SECREF26 . The given sentence is eventually represented by feature vector INLINEFORM0 . Then INLINEFORM1 is classified to a specific DDI type with a softmax classifier. 
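As an informal aside, the bidirectional GRU encoding and word-level attention described above can be approximated in PyTorch as shown below. The paper's exact attention equations are elided here (the DISPLAYFORM placeholders), so the scoring function in this sketch is a generic learned word weighting rather than the authors' formulation, the dropout layer only stands in for the per-GRU dropout described earlier, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BiGRUWordAttention(nn.Module):
    """Bidirectional GRU encoder followed by word-level attention, roughly
    mirroring the layers described above; sizes are placeholders."""
    def __init__(self, in_dim, hidden, p_drop=0.5):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(p_drop)            # approximates the per-GRU dropout
        self.score = nn.Linear(2 * hidden, 1, bias=False)

    def forward(self, x):                         # x: (batch, seq_len, in_dim)
        H, _ = self.gru(x)                        # encoded words: (batch, seq_len, 2*hidden)
        H = self.drop(H)
        alpha = torch.softmax(self.score(H).squeeze(-1), dim=1)   # word weights
        return (alpha.unsqueeze(-1) * H).sum(dim=1)               # sentence feature vector

enc = BiGRUWordAttention(in_dim=120, hidden=100)  # e.g. word + two position embeddings
r = enc(torch.randn(8, 60, 120))                  # r: (8, 200)
```

The sentence-level attention layer would then combine the feature vectors r of all instances sharing the same drug pair before the softmax classifier.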
In next section, we will evaluate our model's DDI prediction performance and see the advantages and shortcomings of our model." ], [ "We use the DDI corpus of the 2013 DDIExtraction challenge BIBREF16 to train and test our model. The DDIs in this corpus are classified as five types. We give the definitions of these types and their example sentences, as shown in table TABREF4 . This standard dataset is made up of training set and testing set. We use the same metrics as in other drug-drug interaction extraction literature BIBREF11 , BIBREF10 , BIBREF25 , BIBREF9 , BIBREF8 , BIBREF12 : the overall precision, recall, and F1 score on testing set. INLINEFORM0 denotes the set of {False, Mechanism, Effect, Advise, Int}. The precision and recall of each INLINEFORM1 are calculated by DISPLAYFORM0 DISPLAYFORM1 ", "Then the overall precision, recall, and F1 score are calculated by DISPLAYFORM0 ", "Besides, we evaluate the captured feature vectors with t-SNE BIBREF26 , a visualizing and intuitive way to map a high dimensional vector into a 2 or 3-dimensional space. If the points in a low dimensional space are easy to be split, the feature vectors are believed to be more distinguishable." ], [ "We use TensorFlow BIBREF27 r0.11 to implement the proposed model. The input of each word is an ordered triple (word, relative distance from drug1, relative distance from drug2). The sentence, which is represented as a matrix, is fed to the model. The output of the model is a INLINEFORM0 -dimensional vector representing the probabilities of being corresponding DDI. It is the network, parameters, and hyperparameters which decides the output vector. The network's parameters are adjusted during training, where the hyperparameters are tuned by hand. The hyperparameters after tuning are as follows. The word embedding's dimension INLINEFORM1 , the position embedding's dimension INLINEFORM2 , the hidden state's dimension INLINEFORM3 , the probability of dropout INLINEFORM4 , other hyperparameters which are not shown here are set to TensorFlow's default values.", "The word embedding is initialized by pre-trained word vectors using GloVe BIBREF28 , while other parameters are initialized randomly. During each training step, a mini-batch (the mini-batch size INLINEFORM0 in our implementation) of sentences is selected from training set. The gradient of objective function is calculated for parameters updating (See Section SECREF26 ).", "Figure FIGREF32 shows the training process. The objective function INLINEFORM0 is declining as the training mini-batches continuously sent to the model. As the testing mini-batches, the INLINEFORM1 function is fluctuating while its overall trend is descending. The instances in testing set are not participated in training so that INLINEFORM2 function is not descending so fast. However, training and testing instances have similar distribution in sample space, causing that testing instances' INLINEFORM3 tends to be smaller along with the training process. INLINEFORM4 has inverse relationship with the performance measurement. The F1 score is getting fluctuating around a specific value after enough training steps. The reason why fluctuating range is considerable is that only a tiny part of the whole training or testing set has been calculated the F1 score. Testing the whole set during every step is time consuming and not necessary. We will evaluate the model on the whole testing set in Section SECREF47 ." 
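To make the evaluation concrete, the sketch below computes per-type and overall precision, recall, and F1 from gold and predicted labels. Since the aggregation formulas above are elided (DISPLAYFORM placeholders), the micro-average over the four positive DDI types shown here follows the usual DDIExtraction 2013 convention and is stated as an assumption rather than the paper's exact definition.

```python
from collections import Counter

def prf(gold, pred, positive_types=("Mechanism", "Effect", "Advise", "Int")):
    """Per-type precision/recall/F1 and a micro-average over the positive types."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    per_type = {}
    for t in positive_types:
        p = tp[t] / (tp[t] + fp[t]) if tp[t] + fp[t] else 0.0
        r = tp[t] / (tp[t] + fn[t]) if tp[t] + fn[t] else 0.0
        per_type[t] = (p, r, f1(p, r))

    TP = sum(tp[t] for t in positive_types)
    FP = sum(fp[t] for t in positive_types)
    FN = sum(fn[t] for t in positive_types)
    P = TP / (TP + FP) if TP + FP else 0.0
    R = TP / (TP + FN) if TP + FN else 0.0
    return per_type, (P, R, f1(P, R))
```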
], [ "We save our model every 100 step and predict all the DDIs of the instances in the testing set. These predictions' F1 score is shown in figure FIGREF40 . To demonstrate the sentence level attention layer is effective, we drop this layer and then directly use INLINEFORM0 for softmax classification (See figure FIGREF15 ). The result is shown with “RNN + dynamic word embedding + ATT” curve, which illustrates that the sentence level attention layer contributes to a more accurate model.", "Whether a dynamic or static word embedding is better for a DDI extraction task is under consideration. Nguyen et al. BIBREF21 shows that updating word embedding at the time of other parameters being trained makes a better performance in relation extraction task. We let the embedding be static when training, while other conditions are all the same. The “RNN + static word embedding + 2ATT” curve shows this case. We can draw a conclusion that updating the initialized word embedding trains more suitable word vectors for the task, which promotes the performance.", "We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction. The predictions confusion matrix is shown in table TABREF46 . The DDIs other than false being classified as false makes most of the classification error. It may perform better if a classifier which can tells true and false DDI apart is trained. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. The “Int” sentence describes there exists interaction between two drugs and this information implies the two drugs' combination will have good or bed effect. That's the reason why “Int” and “Effect” are often obfuscated.", "To evaluate the features our model captured, we employ scikit-learn BIBREF29 's t-SNE class to map high dimensional feature vectors to 2-dimensional vectors, which can be depicted on a plane. We depict all the features of the instances in testing set, as shown in figure FIGREF41 . The RNN model using dynamic word embedding and 2 layers of attention is the most distinguishable one. Unfortunately, the classifier can not classify all the instances into correct classes. Comparing table TABREF46 with figure UID44 , both of which are from the best performed model, we can observe some conclusions. The “Int” DDIs are often misclassified as “Effect”, for the reason that some of the “Int” points are in the “Effect” cluster. The “Effect” points are too scattered so that plenty of “Effect” DDIs are classified to other types. The “Mechanism” points are gathered around two clusters, causing that most of the “mechanism” DDIs are classified to two types: “False” and “Mechanism”. In short, the visualizability of feature mapping gives better explanations for the prediction results and the quality of captured features." ], [ "To conclude, we propose a recurrent neural network with multiple attention layers to extract DDIs from biomedical text. The sentence level attention layer, which combines other sentences containing the same drugs, has been added to our model. The experiments shows that our model outperforms the state-of-the-art DDI extraction systems. Task relevant word embedding and two attention layers improved the performance to some extent.", "The imbalance of the classes and the ambiguity of semantics cause most of the misclassifications. 
We consider that generating instances with generative adversarial networks could alleviate the shortage of instances in specific categories. It would also be reasonable to use distant supervision (which utilizes other relevant material) to supplement knowledge and obtain a better performing DDI extraction system." ], [ "This work is supported by the NSFC under Grant 61303191, 61303190, 61402504, 61103015." ] ], "section_name": [ "Introduction", "Related Work", "Proposed Model", "Preprocessing", "Embedding Layer", "Bidirectional RNN Encoding Layer", "Word Level Attention", "Sentence Level Attention", "Classification and Training", "DDI Prediction", "Datasets and Evaluation Metrics", "Hyperparameter Settings and Training", "Experimental Results", "Conclusion and Future Work", "Acknowledgment" ] }
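The t-SNE inspection described in the experimental results above can be reproduced with a few lines of scikit-learn. The sketch below is illustrative only: `features` and `labels` are placeholder names for the dense vectors captured from the trained model and their DDI types, and the plotting choices are assumptions since the paper does not specify them.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_feature_map(features, labels,
                     types=("False", "Mechanism", "Effect", "Advise", "Int")):
    # map the captured high-dimensional feature vectors onto a 2-D plane
    coords = TSNE(n_components=2, random_state=0).fit_transform(np.asarray(features))
    labels = np.asarray(labels)
    for t in types:
        mask = labels == t
        if mask.any():
            plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=t)
    plt.legend()
    plt.show()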
{ "answers": [ { "annotation_id": [ "a7063c73663ca3470b2f8c60c0a294efee32a10b" ], "answer": [ { "evidence": [ "The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the interested two drugs with “drug1” and “drug2” while the other drugs are replaced by “durg0”, as in BIBREF9 did. This step is called drug blinding. For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug blinded sentences are the instances that are fed to our model." ], "extractive_spans": [ "contains thousands of XML files, each of which are constructed by several records" ], "free_form_answer": "", "highlighted_evidence": [ "The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "ba9d206b09685047828f3f50ccbd464fbadee940" ], "answer": [ { "evidence": [ "We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction. The predictions confusion matrix is shown in table TABREF46 . The DDIs other than false being classified as false makes most of the classification error. It may perform better if a classifier which can tells true and false DDI apart is trained. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. The “Int” sentence describes there exists interaction between two drugs and this information implies the two drugs' combination will have good or bed effect. That's the reason why “Int” and “Effect” are often obfuscated." ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to 0.7148 best state-state-of-the-art result.", "highlighted_evidence": [ "We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2e5e8cf9a787c14ed9bdbdf552c24a103e2ff467" ], "answer": [ { "evidence": [ "We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction. The predictions confusion matrix is shown in table TABREF46 . The DDIs other than false being classified as false makes most of the classification error. It may perform better if a classifier which can tells true and false DDI apart is trained. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. 
The “Int” sentence describes there exists interaction between two drugs and this information implies the two drugs' combination will have good or bed effect. That's the reason why “Int” and “Effect” are often obfuscated." ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220.", "highlighted_evidence": [ "We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "938e10deafaf39e71819eaf3d98e10b47f81b839" ], "answer": [ { "evidence": [ "In DDI extraction task, NLP methods or machine learning approaches are proposed by most of the work. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomenons and two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies kernel method to the existing model and outperforms it.", "Neural network based approaches have been proposed by several works. Liu et al. BIBREF9 employ CNN for DDI extraction for the first time which outperforms the traditional machine learning based methods. Limited by the convolutional kernel size, the CNN can only extracted features of continuous 3 to 5 words rather than distant words. Liu et al. BIBREF8 proposed dependency-based CNN to handle distant but relevant words. Sahu et al. BIBREF12 proposed LSTM based DDI extraction approach and outperforms CNN based approach, since LSTM handles sentence as a sequence instead of slide windows. To conclude, Neural network based approaches have advantages of 1) less reliance on extra NLP toolkits, 2) simpler preprocessing procedure, 3) better performance than text analysis and machine learning methods." ], "extractive_spans": [ "Chowdhury BIBREF14 and Thomas et al. BIBREF11", "FBK-irst BIBREF10", "Liu et al. BIBREF9", "Sahu et al. BIBREF12" ], "free_form_answer": "", "highlighted_evidence": [ "Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomenons and two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies kernel method to the existing model and outperforms it.", "Neural network based approaches have been proposed by several works. Liu et al. BIBREF9 employ CNN for DDI extraction for the first time which outperforms the traditional machine learning based methods.", " Sahu et al. BIBREF12 proposed LSTM based DDI extraction approach and outperforms CNN based approach, since LSTM handles sentence as a sequence instead of slide windows." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "How big is the evaluated dataset?", "By how much does their model outperform existing methods?", "What is the performance of their model?", "What are the existing methods mentioned in the paper?" 
], "question_id": [ "b42323d60827ecf0d9e478c9a31f90940cfae975", "1a69696034f70fb76cd7bb30494b2f5ab97e134d", "9a596bd3a1b504601d49c2bec92d1592d7635042", "1ba28338d3f993674a19d2ee2ec35447e361505b" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "TABLE I THE DDI TYPES AND CORRESPONDING EXAMPLES", "Fig. 1. Partial records in DDI corpus", "Fig. 2. The bidirectional recurrent neural network with multiple attentions", "Fig. 3. The Gated Recurrent Unit", "Fig. 4. The objective function and F1 in the train process", "Fig. 5. The F1 scores on the whole testing set", "TABLE II PERFORMANCE COMPARISON WITH OTHER APPROACHES", "Fig. 6. The features which mapped to 2D", "TABLE III PREDICTION RESULTS" ], "file": [ "2-TableI-1.png", "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "5-Figure5-1.png", "6-TableII-1.png", "6-Figure6-1.png", "6-TableIII-1.png" ] }
[ "By how much does their model outperform existing methods?", "What is the performance of their model?" ]
[ [ "1705.03261-Experimental Results-2" ], [ "1705.03261-Experimental Results-2" ] ]
[ "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to 0.7148 best state-state-of-the-art result.", "Answer with content missing: (Table II) Proposed model has F1 score of 0.7220." ]
415
2002.08899
Compositional Neural Machine Translation by Removing the Lexicon from Syntax
The meaning of a natural language utterance is largely determined from its syntax and words. Additionally, there is evidence that humans process an utterance by separating knowledge about the lexicon from syntax knowledge. Theories from semantics and neuroscience claim that complete word meanings are not encoded in the representation of syntax. In this paper, we propose neural units that can enforce this constraint over an LSTM encoder and decoder. We demonstrate that our model achieves competitive performance across a variety of domains including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. In these cases, our model outperforms the standard LSTM encoder and decoder architecture on many or all of our metrics. To demonstrate that our model achieves the desired separation between the lexicon and syntax, we analyze its weights and explore its behavior when different neural modules are damaged. When damaged, we find that the model displays the knowledge distortions that aphasics are evidenced to have.
{ "paragraphs": [ [ "Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted." ], [ "BIBREF0 BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.", "In the same study, Wernicke's aphasics performed poorly regardless of the sentence type. This provides evidence that Broca's area cannot yield an understanding of word meanings.", "Taken together, the two experiments support the theory that Broca's area creates a representation of the syntax without encoding complete word meanings. These other lexical aspects are represented separately in Wernicke's area, which does not encode syntax." ], [ "A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement." ], [ "In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). 
So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.", "With the exception of the equations that we list below, the encoding and decoding follow standard paradigms BIBREF2. The input at a time step to the LSTM encoder is a vector embedding for the input token. The final hidden and cell states of the encoder are the starting hidden and cell states of the LSTM decoder. The decoder does not take tokens as inputs; it decodes by relying solely on its hidden and cell states. The $t$th output, $o_t$, from the decoder is Softmax$(W(h_t))$, where $W$ is a fully connected layer and $h_t$ is the decoder's $t$th hidden state. $o_t$ has the length of the output dictionary. $o_t$'s index with the highest value corresponds to the token choice. The encoder and decoder's weights are optimized with the negative log likelihood loss. The inputs to the loss function are the log of the model's output and the ground-truth at each time step. Below, we describe our modifications. $l = \sigma (\vee (w_1, w_2, \ldots , w_m))$", "$l_a = \sigma (W_{a_2}(\text{ReLU}(W_{a_1}(\text{GradReverse}(h_e \frown c_e)))))$", "$o^{\prime }_t = l \odot o_t$", "Where:", "$m$ is the number of input tokens.", "$w_i$ is a vector embedding for the $i$th input token, and its length is that of the output dictionary. It is not the same embedding used by the encoder LSTM.", "$\sigma $ is the Sigmoid function.", "$\vee $ is the max pooling of a sequence of vectors of the same length. The weight at the output vector's $i$th index is the max of all input vectors' weights at their $i$th indices.", "$h_e$ and $c_e$ are the final hidden and cell states of the encoder.", "$W_{a_1}$ and $W_{a_2}$ are fully connected layers.", "$\frown $ is concatenation.", "$\odot $ is the elementwise product.", "GradReverse multiplies the gradient by a negative number upon backpropagation.", "$l$ is the output of the Lexicon Unit. Due to the max pooling, only one input token can be responsible for the value at a particular index of the output vector. The weights, $w_i$, are optimized solely by computing the binary cross entropy (BCE) loss between $l$ and the indicator vector where the $k$th element is 1 if the $k$th token in the output dictionary is in the output and 0 otherwise. This procedure forces a $w_i$ to represent the output tokens that are associated with its respective input token, without relying on aggregated contributions from the presence of several input tokens, and independently of the input word order.", "$l_a$ is the output of the Lexicon-Adversary Unit. Its weights are optimized according to the BCE loss with $l$ as the target. This means that $l_a$ is the Lexicon-Adversary Unit's approximation of $l$. Because $h_e$ and $c_e$ are passed through a gradient reversal layer, the LSTM encoder is regularized to produce a representation that does not include information from $l$. Consequently, the LSTM decoder does not have this information either.", "$o^{\prime }_t$ is the $t$th output of our model. It can be converted to a token by finding the index with the highest weight.
It is the result of combining $l$ via an elementwise product with the information from the regularized decoder.", "The recurrent encoder and decoder are the only modules that can represent the syntax, but they lack the expressivity to encode all potential aspects of word meaning. So, they are not always capable of producing a theoretically denied representation by giving all words their own syntactic category. The Lexicon Unit can represent these missing lexical aspects, but it lacks the expressivity to represent the syntax. See Figure FIGREF3 for the model." ], [ "We used BIBREF4's BIBREF4 small diagnostic, the Geoquery semantic parsing dataset BIBREF5, the Wall Street Journal syntactic parsing dataset of sentences up to length 10 BIBREF6, and the Tatoeba BIBREF7 English to Chinese translation dataset processed by BIBREF8 BIBREF8.", "To avoid the biases that can be introduced with hyperparameter tuning, we used the same hyperparameters with every model on every domain. These were chosen arbitrarily and kept after they enabled all models to reach a similar train accuracy (typically, close to 100 percent) and after they enabled all models to achieve a peak validation performance and then gradually yield worse validation scores. The hyperparameters are as follows. LSTM hidden size = 300, Lexicon Unit batch size = 1, batch size for other modules = 30, epoch to stop training the Lexicon Unit and start training other modules = 30, epoch to stop training = 1000, and Lexicon-Adversary Unit hidden size = 1000. The optimizer used for the Lexicon Unit was a sparse implementation of Adam BIBREF9 with a learning rate of 0.1 and otherwise the default PyTorch settings BIBREF10. In the other cases it was Adam BIBREF9 with the default PyTorch settings BIBREF10. The gradient through the encoder from the adversary's gradient reversal layer is multiplied by -0.0001. Additionally, the validation score is calculated after each train epoch and the model with the best is tested. To compute the Lexicon Unit to use, we measure its loss (BCE) on the validation set. Unless otherwise stated, we use the mean number of exact matches as the validation metric for the full model.", "To judge overall translation performance, we compared the LLA-LSTM encoder and decoder with the standard LSTM encoder and decoder. We also compared our model with one that does not have the adversary but is otherwise identical. The LLA-LSTM model shows improvements over the standard model on many or all of the metrics for every naturalistic domain. Many of the improvements over the other models are several percentage points. In the few scenarios where the LLA-LSTM model does not improve upon the standard model, the discrepancy between the models is small. The discrepancy is also small when the LLA-LSTM model with no adversary performs better than the LLA-LSTM model. Table TABREF4 displays the test results across the domains.", "Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. 
Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision." ], [ "The first experiment by BIBREF4 BIBREF4 included a dataset of 14 training pairs and 10 test pairs. In the dataset, an input is a sequence of words from an artificial language created by the authors. An output is a sequence of colored dots. Because the dataset is so small, we use the train set as the validation set. The input and output dictionary are 7 and 4 words, respectively (not including the stop, “$<s>$,” token). In their paper, the authors argue that it is clear that the words have meanings. Four of the words correspond to unique output tokens, and three of them correspond to functions of the output tokens (for example, repeating the same dot three times). The dataset showcases the contrast between human and standard neural network responses. Their paper shows that humans had high accuracy on the test set, whereas standard neural models scored essentially zero exact matches.", "The LLA-LSTM model that we tested appears to achieve only insignificantly higher results in Table TABREF4. However, it has learned, from just 14 training examples, how to map some of the words to BIBREF4's interpretation of their context-invariant meanings. This is shown in Figure FIGREF6 (a). In the figure, “dax,” “lug,” “wif,” and “zup” are interpreted correctly to mean “r,” “g,” “b,” and “y,” respectively. Here, the letters correspond to the types of unique dots, which are red, green, blue, and yellow, respectively. The other words, “fep,” “kiki,” and “blicket,” are taken by BIBREF4 to have functional meanings, and so are correctly not associated strongly with any of the output tokens. The exceptions are two erroneous associations between “kiki” and blue and “blicket” and green. Also, every sentence has a stop token, so the LLA units learned that the context-invariant meanings of each word include it. The LLA units can handle cases where a word corresponds to multiple output tokens, and the output tokens need not be monolithic in the output sequence. As shown in tests from all of the other domains, these output token correspondences may or may not be relevant depending on the specific context of a word, but the recurrent component of the architecture is capable of determining which to use." 
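The $\sigma (w)$ vectors discussed above are the sigmoid of the per-token weight rows learned by the Lexicon Unit. The following PyTorch sketch shows one way the Lexicon and Lexicon-Adversary units defined in the Model section could be written; it is not the authors' code, and the module names, initialization, and tensor shapes are illustrative assumptions. The -0.0001 gradient multiplier matches the value reported in the experimental setup.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # identity in the forward pass; multiplies the gradient by -lam on backpropagation
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class LexiconUnit(nn.Module):
    def __init__(self, input_vocab, output_vocab):
        super().__init__()
        # one learnable vector of output-dictionary length per input token
        self.w = nn.Embedding(input_vocab, output_vocab)

    def forward(self, input_ids):                 # shape (num_tokens,)
        pooled, _ = self.w(input_ids).max(dim=0)  # elementwise max pooling over tokens
        return torch.sigmoid(pooled)              # l, trained with BCE against the output indicator

class LexiconAdversary(nn.Module):
    def __init__(self, state_dim, output_vocab, hidden=1000, lam=1e-4):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, output_vocab))

    def forward(self, h_e, c_e):
        state = GradReverse.apply(torch.cat([h_e, c_e], dim=-1), self.lam)
        return torch.sigmoid(self.net(state))     # l_a, trained with BCE against l

# At decoding step t, the regularized decoder's distribution o_t is gated by l:
# o_prime_t = l * o_t.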
], [ "Geoquery (GEO) is a dataset where an input is an English geography query and the corresponding output is a parse that a computer could use to look up the answer in a database BIBREF5. We used the standard test set of 250 pairs from BIBREF12 BIBREF12. The remaining data were randomly split into a validation set of 100 pairs and a train set of 539 pairs. We tokenized the input data by splitting on the words and removing punctuation. We tokenized the output data by removing commas and splitting on words, parentheses, and variables. There are 283 tokens in the input dictionary and 177 tokens in the output dictionary, respectively.", "Figure FIGREF6 (b) shows some weights for four input words, which are all relevant to the inputs. Many of the weights correspond directly to the correct predicates. Other tokens have high weights because they are typically important to any parse. These are parentheses, variables (A, B, C, and D), the “answer” token, and the stop token." ], [ "The Wall Street Journal portion of the Penn Treebank is a dataset where English sentences from The Wall Street Journal are paired with human-generated phrase parses BIBREF6. We use the test, validation, and train set from BIBREF13's BIBREF13 paper. For efficiency, we only use sentences that have 10 or fewer words, lowercase all words, and modify BIBREF13's output data so that left parentheses are paired with their corresponding nonterminal and right parentheses are paired with their corresponding terminal. The input and output data were both tokenized by splitting where there is a space. The test, validation, and train set are 398, 258, and 6007 pairs, respectively. There are 9243 tokens in the input dictionary and 9486 tokens in the output dictionary.", "Figure FIGREF6 (c) shows some weights for four input words. They all highlight the relevant terminal, and syntactic categories that are usually associated with that word. The associated categories typically are either those of that word, the phrases headed by the category of that word, or those that select or are selected by that word. The relevant nonterminal terminology is as follows BIBREF6: “(in” is a preposition or subordinating conjunction, “(np” is a noun phrase, “(pp” is a prepositional phrase, “(np-subj” is a noun phrase with a surface subject marking, “(vp” is a verb phrase, “(vbn” is a verb in the past participle, “(adjp” is an adjective phrase, “(vbp” is a non-3rd person singular present verb, “(prp” is a personal pronoun, “(rb” is an adverb, “(sq” is the main clause of a wh-question, or it indicates an inverted yes or no question, and “(s” is the root." ], [ "The Tatoeba BIBREF7 English to Chinese translation dataset, processed by BIBREF8 BIBREF8, is a product of a crowdsourced effort to translate sentences of a user's choice into another language. The data were split randomly into a test, validation, and train set of 1500, 1500, and 18205 pairs, respectively. The English was tokenized by splitting on punctuation and words. The Chinese was tokenized by splitting on punctuation and characters. There are 6919 and 3434 tokens in the input and output dictionary, respectively. There are often many acceptable outputs when translating one natural language to another. As a result, we use the corpus-level BLEU score BIBREF11 to test models and score them on the validation set.", "Figure FIGREF6 (d) shows some weights for four input words. 
The listed Chinese words are an acceptable translation (depending on the context) and correspond roughly one-to-one with the English inputs. There are three exceptions. Although UTF8bsmi 么 is correctly given a low weight, its presence seems to be an error; it usually appears with another character to mean “what.” UTF8bsmi 我 們 and UTF8gbsn 我 们 typically translate to “we,” even though UTF8gbsn 我 alone translates to “me.” UTF8bsmi 們 is a plural marker and UTF8gbsn 们 is the same, but simplified; both versions evidently found their way into the dataset. The network has correctly learned to associate both Chinese words necessary to form the meaning of “we.” Also, UTF8gbsn 步散 means “walk,” but UTF8gbsn 散 generally does not appear alone to mean “walk.” Again, the network has learned to correctly associate all of the necessary characters with an input word.", "The results from this dataset in Table TABREF5 warrant a discussion for readers who do not know Chinese. As in the other cases, the model demonstrates the expected knowledge and lack thereof when different types of artificial aphasia are induced. The outputs are also productions that Chinese aphasics are expected to make per BIBREF0's BIBREF0 description. When the model is undamaged, its output is a correct translation for “I ate some fish.” When the model's LSTMs are damaged (simulating the conditions for Broca's aphasia), the production has incorrect syntax, and translates word for word to “eat I ...” These are both correct content words. When the model's Lexicon Unit is damaged (simulating the conditions for Wernicke's aphasia), the production has correct syntax. Impressively, the Chinese actually has the same syntax as the correct translation for “I ate some fish.” However, the content is nonsensical. The English translation is “I took the utterance.” Compared to the correct Mandarin translation, this incorrect one has the same subject and the same past-tense marker, UTF8gbsn 了 , for the verb. However it uses a different verb, object, and determiner." ], [ "There is evidence that generic attention mechanisms for machine translation already utilize the thesis that words have meanings that are independent of syntax. They learn correspondences between output tokens and a hidden state produced immediately after an encoder reads a particular input word BIBREF14. But the same mechanism is not at play in our model. Generic attention mechanisms do not necessarily impose a constraint on the input's syntax representation. Additionally, typical objective functions do not explicitly link input words with invariance in the output. Finally, one does not need to choose either LLA units or attention. LLA units can be incorporated into recurrent neural network systems with attention or other machine transduction architectures such as transformers BIBREF15.", "Recent work has incorporated some of the ideas in our paper into a neural machine translation model with the use of a specific attention mechanism BIBREF16. But the authors only demonstrate success on a single artificial dataset with a lexicon of about ten words, and they did not explore the effects of damaging parts of their model. Their optimization procedure also does not prohibit context-invariant lexical information from passing through the recurrent portion of their model. This incorrectly allows the possibility for a representation to be learned that gives every input word its own syntactic category. 
Lastly, their architecture provides a softer constraint than the one that we demonstrate, as information from several input words can aggregate and pass through the non-recurrent module that they use.", "There are other attempts to incorporate theories about human language to regularize a transduction model, but many have not scaled to the level of generality that the LLA units and some attention architectures show. These include synchronous grammars BIBREF17, data augmentation BIBREF18, Meta learning BIBREF19, and hard-coded maps or copying capabilities from input to output BIBREF20 BIBREF21. All require hard-coded rules that are often broken by the real world." ], [ "Neural and cognitive theories provide an imperative for computational models to understand human language by separating representations of word meanings from those of syntax. Using this constraint, we introduced new neural units that can provide this separation for the purpose of translating human languages. When added to an LSTM encoder and decoder, our units showed improvements in all of our experiment domains over the typical model. The domains were a small artificial diagnostic dataset, semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We also showed that the model learns a representation of human language that is similar to that of our brains. When damaged, the model displays the same knowledge distortions that aphasics do." ], [ "NOT INCLUDED IN DRAFT SUBMISSION", ".125in -" ] ], "section_name": [ "Introduction", "Neural Motivation", "Cognitive Motivation", "Model", "Experiments", "Experiments ::: Sequences of Color", "Experiments ::: Semantic Parsing", "Experiments ::: Syntactic Parsing", "Experiments ::: English to Chinese", "Related Work", "Conclusion", "Acknowledgments" ] }
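As stated in the caption of Table 2, a module is "damaged" by randomly re-initializing its weights, which discards everything it learned. A minimal sketch of that manipulation is shown below; `model` and its attribute names are placeholders, and the initialization scale is arbitrary.

import torch.nn as nn

def damage(module: nn.Module, std: float = 0.1):
    # lose all learned information in this module by re-initializing its weights
    for p in module.parameters():
        nn.init.normal_(p, mean=0.0, std=std)

# e.g. damage(model.lexicon_unit) to simulate Wernicke's aphasia, or
# damage(model.encoder); damage(model.decoder) to simulate Broca's aphasia.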
{ "answers": [ { "annotation_id": [ "a1b30589d21ca744e1d2d49c2af15b3149c12399" ], "answer": [ { "evidence": [ "In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "60dad4a02aa0b1bbe12962196aa5999f32bc0eaa" ], "answer": [ { "evidence": [ "Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision." 
], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia.", "Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2e7c7a4077d8a41a05cbbca9159b0bdb38c4007b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Results for artificial Wernicke’s and Broca’s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times." ], "extractive_spans": [], "free_form_answer": "Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information.", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results for artificial Wernicke’s and Broca’s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "da8fb8ae6dbafe1d31622b8cf75baffc1fe6bb7a" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Does having constrained neural units imply word meanings are fixed across different context?", "Do they perform a quantitative analysis of their model displaying knowledge distortions?", "How do they damage different neural modules?", "Which weights from their model do they analyze?" ], "question_id": [ "8ec94313ea908b6462e1f5ee809a977a7b6bdf01", "f052444f3b3bf61a3f226645278b780ebd7774db", "79ed71a3505cf6f5e8bf121fd7ec1518cab55cae", "74eb363ce30c44d318078cc1a46f8decf7db3ade" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: A graphic of our model. In addition to the terms described under our equations, we depict e terms which are embeddings for input tokens, for use by the LSTM encoder. The LSTM encoder is E, the LSTM decoder is D, the Lexical Unit is L and the Lexicon-Adversary Unit is LA. The dotted area contains the representation for the input’s syntax, adversarially regularized to not include context-invariant lexical information, l. The dashed area contains the representation for this lexical information, which does not have syntax knowledge. At every output step, l is combined with the decoder’s output via an elementwise product.", "Table 1: Comparison of the models on the test sets. The metrics used are mean precision (Prec.), mean recall (Rec.), mean accuracy (Acc.), mean number of exact matches (Exact.), and corpus-level BLEU (Papineni et al., 2002).", "Table 2: Results for artificial Wernicke’s and Broca’s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times.", "Figure 2: The learned σ(w)’s for some diverse input words; they are arbitrarily chosen, subject to the constraints listed in the text. Black is zero and white is one. (a) shows the results from the colors dataset; the input and output dictionaries are so small that all of the weights for all of the input words are shown. (b), (c), and (d) show results from the semantic parsing, syntactic parsing, and English to Mandarin translation datasets, respectively. Because there are so many words in the input dictionary, only four σ(w)’s are shown in each case. Because there are so many tokens in the output dictionary, if a weight within a σ(w) is zero when rounded to the nearest tenth’s place, then it is omitted. So, for each σ(w), approximately 170, 9480, and 3430 weights were omitted for the semantic parsing, syntactic parsing, and English to Mandarin translation cases, respectively." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Figure2-1.png" ] }
[ "How do they damage different neural modules?" ]
[ [ "2002.08899-4-Table2-1.png" ] ]
[ "Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information." ]
416
1803.07771
$\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis
Sentiment analysis is a key component in various text mining applications. Numerous sentiment classification techniques, including conventional and deep learning-based methods, have been proposed in the literature. In most existing methods, a high-quality training set is assumed to be given. Nevertheless, constructing a high-quality training set that consists of highly accurate labels is challenging in real applications. This difficulty stems from the fact that text samples usually contain complex sentiment representations, and their annotation is subjective. We address this challenge in this study by leveraging a new labeling strategy and utilizing a two-level long short-term memory network to construct a sentiment classifier. Lexical cues are useful for sentiment analysis, and they have been utilized in conventional studies. For example, polar and privative words play important roles in sentiment analysis. A new encoding strategy, that is, $\rho$-hot encoding, is proposed to alleviate the drawbacks of one-hot encoding and thus effectively incorporate useful lexical cues. We compile three Chinese data sets on the basis of our label strategy and proposed methodology. Experiments on the three data sets demonstrate that the proposed method outperforms state-of-the-art algorithms.
{ "paragraphs": [ [ "Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.", "Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .", "Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.", "Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.", "We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.", "Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.", "The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study." ], [ "Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.", "Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. 
A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 ." ], [ "Lexicon-based methods are actually implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key to lexicon-based methods is lexical resource construction, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.", "Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1\",“0\", and “-1\", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad\" is 0.25, and the sentiment score of “it is good\" is 1. However, the score of “it is not so bad\" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis." ], [ "Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural networks (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A bi-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.", "Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and those words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach and the method proposed by Shin et al. is discussed in Section 3.3.5.", "Numerous studies on aspect-level sentiment analysis exist BIBREF25 .
This problem is different from the sentiment classification investigated in this study." ], [ "This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues." ], [ "As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts¡¯ sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.", "The service is poor. The taste is good, but the rest is not so bad.", "The quality of the phone is good, but the appearance is just so-so.", "In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative\". By contrast, for another user who does not care about service, the label may be “positive\". Similarly, a user may label S2 as “positive\" if he cares about quality. Another user may label it as “negative\" if the conjunction “but\" attracts the user¡¯s attention more. Another user may label it as “neutral\" if they are concerned about quality and appearance.", "The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.", "Multiple annotators for a large number of data sets require a large budget.", "In our practice, annotators claim that their judgment criteria on sentiment become fused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.", "A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.", "S1:", "S1.1: The service is poor", "S1.2: The taste is good", "S1.3: but the rest is not so bad.", "S2:", "S2.1: The quality of the phone is good", "S2.2: but the appearance is just so-so.", "Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage." ], [ "Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.", "LSTM is a typical RNN model for short-term memory, which can last for a long period of time. An LSTM is applicable to classify, process, and predict time series information with given time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. 
The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0 ", "where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.", "When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0 ", "In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, forward and backward information are considered for information at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus significantly reasonable for text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.", "The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0 ", "where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).", "Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).", "In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3 ", "where DISPLAYFORM0 ", "where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b)." ], [ "The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.", "For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. 
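The display equation defining the second-level inputs is not reproduced in this text, so the sketch below simply assumes that the clause sentiment score and the clause feature vector produced by the first-level network are concatenated. The name `first_level` and the returned pair are placeholder assumptions, not the authors' code.

import torch

def second_level_inputs(first_level, clauses):
    # `clauses`: the embedded clauses of one sentence/paragraph;
    # `first_level(clause)` is assumed to return (dense feature F_i, sentiment score s_i)
    rows = []
    for clause in clauses:
        feat, score = first_level(clause)
        rows.append(torch.cat([score.reshape(-1), feat.reshape(-1)]))
    return torch.stack(rows)   # (num_clauses, 1 + feat_dim), fed to the second-level bi-LSTM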
For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.", "In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can be directly represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as the character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.", "The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitudes of most elements are smaller than 1.", "The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of the one-hot encoded part in the concatenated vectors is small.", "The above two limitations affect the final sentiment analysis performance. To address them, we propose a new encoding strategy. DISPLAYFORM0 ", "where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.", "Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts, so incorporating them should also be useful. In addition, suppositive (conditional) and interrogative sentences carry useful cues: a previous study has shown that conditional sentences account for approximately 8% of the sentences in a typical document BIBREF26 , and sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition. The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.", "Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The remaining words, which do not belong to any of the above five types, are named “others (Oth)\" instead of “neutral\" because some words, such as “the\", are unrelated to “sentiment\". The value of INLINEFORM0 in Eq. (6) is set to 10. The encoded vectors are as follows. INLINEFORM1 ", "In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad\") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0 ", "", "Certain types of words (e.g., positive, negative, and privative) should play more important roles than other words in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows.
DISPLAYFORM0 ", "where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.", "POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation lies in that certain types of POS should play important roles in sentiment.", "The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3 ", "When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0 ", "where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.", "Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as “but\" and “moreover\" usually indicate the focus of texts and attract readers¡¯ attention. Therefore, conjunctions are considered in the input of the second-level LSTM.", "Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.", "When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0 ", "where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.", "Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .", "The lexicon embedding proposed by Shin et al. us-es one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.", "The lexicon embedding proposed by Shin et al. ex-tends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also only relies on zero padding and is thus different from the proposed method.", "Only sentimental words are considered in the lexicon embedding proposed by Shin et al. On the contrary, sentimental words, POS, and conjunctions are considered in our work." ], [ "The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. 
The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.", " INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.", " Output: A trained two-level LSTM for sentiment classification.", " Steps:", "", "Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;", "Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;", "Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;", "Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;", "Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.", "The first-level and second-level LSTM networks together constitute the final two-level LSTM.", "The proposed two-level LSTM can be applied to texts in arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.", "This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.", "We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are user reviews. Each collected text sample is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.", "The labels “+1\", “0.5\", and “0\" correspond to the three sentiment classes “positive\", “neutral\", and “negative\", respectively. The text data are labeled according to our two-stage labeling strategy.", "In the first stage, only one user is invited to label each clause sample, because the sentiment orientations of clauses (or sub-sentences) are easy to label.", "In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive\". Samples with average scores located in [0, 0.4] are labeled as “negative\". Others are labeled as “neutral\". The details of the labeling results are shown in Table 1.", "All the training and test data and the labels are available online.", "In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.", "In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run.
The test data are fixed to facilitate experimental duplication and comparison by other researchers.", "In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.", "For BOW, term frequency-inverse document frequency is utilized to construct features. Ridge regression BIBREF29 is used as the classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with superior performance are presented. CNN and LSTM are implemented in TensorFlow, and default parameter settings are followed.", "The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .", "In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.", "CNN-C denotes the CNN with (Chinese) character embedding.", "CNN-W denotes the CNN with (Chinese) word embedding.", "CNN-Lex-C denotes the algorithm proposed by Shin et al. BIBREF24 , which integrates polar words into the CNN. The (Chinese) character embedding is used.", "CNN-Lex-W denotes the algorithm proposed by Shin et al. BIBREF24 , which integrates polar words into the CNN. The (Chinese) word embedding is used.", "Bi-LSTM-C denotes the Bi-LSTM with (Chinese) character embedding.", "Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.", "Lex-rule denotes the rule-based approach shown in Fig. 1. This approach is unsupervised.", "BOW denotes the conventional algorithm which is based on bag-of-words features.", "The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This is in accordance with the finding, based on extensive comparative studies, that RNNs perform competitively against CNNs in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-Lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not used in the remaining experiments.", "In this experimental comparison, the proposed two-level LSTM is evaluated without lexicon embedding anywhere in the network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.", "The raw and clause data listed in Table 1 are used to run the two-level LSTM. Tl-LSTM denotes the two-level LSTM. “R+C\" refers to the mixture of the raw and clause data. The test data are still the 1000 samples used in Section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To ensure that the results differ from those in Table 3, we explicitly add “R+C\" after each algorithm in Table 4.
In the last line of Table 4, the base results for each corpus in Table 3 are also listed.", "On all three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and the mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.", "In addition, for the involved algorithms, most results achieved on “R+C\" are better than the best results achieved on “R\" alone, as listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.", "The results also show that in the two-level LSTM, character embedding is more effective than word embedding.", "In this experimental run, lexicon embedding is used in the proposed two-level LSTM, i.e., INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.", "The performances of Tl-LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of Tl-LSTM without lexicon embedding listed in Table 5. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.", "The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding, which consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved technique and embeddings separately.", "Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.", "In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and 1 without optimization. The parameter INLINEFORM2 in Eq. (6) is set to 15. The classification accuracies vary according to different INLINEFORM3 values on all three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases, as shown in Fig. 7.", "The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements to 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.", "In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduced INLINEFORM2 -duplication strategy in the encoding is effective. In the experiments, as INLINEFORM3 increases, the accuracies first increase and then decrease.
The main reason may lie in the fact that when INLINEFORM4 becomes large, the proportion of the lexicon embedding becomes large accordingly. An overly long input feature vector may incur the “curse of dimensionality\" and thus weaken the performance of the proposed two-level network.", "In this experimental run, we test whether the labeled polar (negative and positive) words affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. The top 0%, 50%, and 100% of polar words are used. The corresponding classification accuracies are depicted in Fig. 8.", "In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.", "In the experiment, only one user is invited to manually compile the dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile, considering the performance improvement relative to the time consumed.", "In this experimental run, we test whether POS cues play positive roles in the entire model. To this end, we remove POS from the lexicon embedding of the proposed method. The results are shown in Fig. 9.", "In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.", "In this experimental run, we test whether conjunction cues play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.", "The algorithm with conjunction embedding consistently outperforms that without it, thereby indicating that the application of conjunctions to lexicon embedding is useful.", "High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and those with mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data sets produced by our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding, but it alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.", "The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea to other networks and text mining applications.", "The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, and Pinglong Zhao for labeling the experimental data."
] ], "section_name": [ "Introduction", "Text Sentiment Analysis", "Lexion-based Sentiment Classification", "Deep Learning-based Sentiment Classification", "METHODOLOGY", "Two-stage Labeling", "Two-level LSTM", "Lexical Embedding", "The Learning Procedure" ] }
{ "answers": [ { "annotation_id": [ "a3988736b95d4fd1f9b4de1cc7addeaf1b4c7752" ], "answer": [ { "evidence": [ "In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.", "CNN-C denotes the CNN with (Chinese) character embedding.", "CNN-W denotes the CNN with (Chinese) word embedding.", "CNN-Lex-C denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) character embedding is used.", "CNN-Lex-W denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) word embedding is used.", "Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.", "Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.", "Lex-rule denotes the rule-based approach shows in Fig. 1. This approach is unsupervised.", "BOW denotes the conventional algorithm which is based of bag-of-words features." ], "extractive_spans": [ "CNN-C", "CNN-W", "CNN-Lex-C", "CNN-Lex-W", "Bi-LSTM-C ", "Bi-LSTM-W", "Lex-rule", "BOW" ], "free_form_answer": "", "highlighted_evidence": [ "The involved algorithms are detailed as follows.\n\nCNN-C denotes the CNN with (Chinese) character embedding.\n\nCNN-W denotes the CNN with (Chinese) word embedding.\n\nCNN-Lex-C denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) character embedding is used.\n\nCNN-Lex-W denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) word embedding is used.\n\nBi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.\n\nBi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.\n\nLex-rule denotes the rule-based approach shows in Fig. 1. This approach is unsupervised.\n\nBOW denotes the conventional algorithm which is based of bag-of-words features." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "891408ee280c2f91a90772fc559cc1a46bfdec37" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "4fabca496686708fd9391e356dedb87c87c85d36" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "e202a3feeca2862b84c6c1e57ed9f5ac28c08b4a" ], "answer": [ { "evidence": [ "We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.", "In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. 
Samples with average scores located in [0.6, 1] are labeled as “positive\". Samples with average scores located in [0, 0.4] are labeled as “negative\". Others are labeled as “neutral\". The details of the labeling results are shown in Table 1.", "FLOAT SELECTED: TABLE 1 Details of the three data corpora. Each corpus consists of raw samples (sentences or paragraphs) and partitioned clauses (sub-sentences)." ], "extractive_spans": [], "free_form_answer": "Travel dataset contains 4100 raw samples, 11291 clauses, Hotel dataset contains 3825 raw samples, 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses", "highlighted_evidence": [ "We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". ", "The details of the labeling results are shown in Table 1.", "FLOAT SELECTED: TABLE 1 Details of the three data corpora. Each corpus consists of raw samples (sentences or paragraphs) and partitioned clauses (sub-sentences)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "2f18af0c145679596d0549f6af98712d21480ea9" ], "answer": [ { "evidence": [ "We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora." ], "extractive_spans": [], "free_form_answer": "User reviews written in Chinese collected online for hotel, mobile phone, and travel domains", "highlighted_evidence": [ "We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are about user reviews. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "570ee3b71cbd281cd921febd3670ed7866320316" ], "answer": [ { "evidence": [ "We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods." ], "extractive_spans": [], "free_form_answer": "They use a two-stage labeling strategy where in the first stage single annotators label a large number of short texts with relatively pure sentiment orientations and in the second stage multiple annotators label few text samples with mixed sentiment orientations", "highlighted_evidence": [ "First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. 
In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "question": [ "What are the other models they compare to?", "What is the agreement value for each dataset?", "How many annotators participated?", "How long are the datasets?", "What are the sources of the data?", "What is the new labeling strategy?" ], "question_id": [ "0b9021cefca71081e617a362e7e3995c5f1d2a88", "6ad92aad46d2e52f4e7f3020723922255fd2b603", "4fdc707fae5747fceae68199851e3c3186ab8307", "2d307b43746be9cedf897adac06d524419b0720b", "fe90eec1e3cdaa41d2da55864c86f6b6f042a56c", "9d5df9022cc9eb04b9f5c5a9d8308a332ebdf50c" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1. A lexicon-based approach for sentiment classification.", "Figure 2. Two representative LSTM structures for text classification: a bi-directional LSTM (left), and a bi-directional LSTM with attention (right).", "Figure 3. The proposed two-level LSTM network.", "Figure 4. The histogram of the values in word embedding vectors. Most values are smaller than 1.", "Figure 5. The first-level LSTM with lexicon embedding in both the input and attention layers.", "Figure 6. The whole two-level LSTM network with lexicon embedding in both the input and attention layers.", "TABLE 1 Details of the three data corpora. Each corpus consists of raw samples (sentences or paragraphs) and partitioned clauses (sub-sentences).", "TABLE 2 Numbers of five types of key lexical words.", "TABLE 4 The classification accuracies of two-level LSTM without lexicon embedding.", "TABLE 3 The classification accuracies of existing algorithms on raw samples.", "Figure 7. Classification accuracies under different ρ values. #1-C and #1-W represent ρTl-LSTM-C and ρTl-LSTM-W on the first (travel) corpus, respectively; #2-C and #2-W represent ρTl-LSTM-C and ρTl-LSTMW on the second (hotel) corpus, respectively; #3-C and #3-W represent ρTl-LSTM-C and ρTl-LSTM-W on the third (hotel) corpus, respectively.", "TABLE 6 The accuracy improvement of two-level LSTM when lexicon embedding was used over those of two-level LSTM without lexicon embedding. The values of the last row are the accuracy improvement over the highest accuracies on each data corpus with existing algorithms.", "TABLE 7 The accuracies of ρTl-LSTM with different n values in ρ-hot encoding.", "TABLE 5 The classification accuracies of two-level LSTM with lexicon embedding.", "Figure 8. Classification accuracies under different proportions of polar words. #1-C and #1-W represent ρTl-LSTM-C and ρTl-LSTM-W on the first (travel) corpus, respectively; #2-C and #2-W represent ρTl-LSTMC and ρTl-LSTM-W on the second (hotel) corpus, respectively; #3-C and #3-W represent ρTl-LSTM-C and ρTl-LSTM-W on the third (hotel) corpus, respectively.", "Figure 10. Classification accuracies with and without conjunction in lexicon embedding. #1-C and #1-W represent ρTl-LSTM-C and ρTl-LSTMW on the first (travel) corpus, respectively; #2-C and #2-W represent ρTlLSTM-C and ρTl-LSTM-W on the second (hotel) corpus, respectively; #3-C and #3-W represent ρTl-LSTM-C and ρTl-LSTM-W on the third (hotel) corpus, respectively.", "Figure 9. Classification accuracies with and without POS in lexicon embedding. #1-C and #1-W represent ρTl-LSTM-C and ρTl-LSTM-W on the first (travel) corpus, respectively; #2-C and #2-W represent ρTlLSTM-C and ρTl-LSTM-W on the second (hotel) corpus, respectively; #3-C and #3-W represent ρTl-LSTM-C and ρTl-LSTM-W on the third (hotel) corpus, respectively." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "5-Figure3-1.png", "5-Figure4-1.png", "6-Figure5-1.png", "6-Figure6-1.png", "7-Table1-1.png", "7-Table2-1.png", "8-Table4-1.png", "8-Table3-1.png", "9-Figure7-1.png", "9-Table6-1.png", "9-Table7-1.png", "9-Table5-1.png", "10-Figure8-1.png", "10-Figure10-1.png", "10-Figure9-1.png" ] }
[ "How long are the datasets?", "What are the sources of the data?", "What is the new labeling strategy?" ]
[ [ "1803.07771-The Learning Procedure-13", "1803.07771-The Learning Procedure-16", "1803.07771-7-Table1-1.png" ], [ "1803.07771-The Learning Procedure-13" ], [ "1803.07771-Introduction-4" ] ]
[ "Travel dataset contains 4100 raw samples, 11291 clauses, Hotel dataset contains 3825 raw samples, 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses", "User reviews written in Chinese collected online for hotel, mobile phone, and travel domains", "They use a two-stage labeling strategy where in the first stage single annotators label a large number of short texts with relatively pure sentiment orientations and in the second stage multiple annotators label few text samples with mixed sentiment orientations" ]
418
1907.05403
Incrementalizing RASA's Open-Source Natural Language Understanding Pipeline
As spoken dialogue systems and chatbots are gaining more widespread adoption, commercial and open-sourced services for natural language understanding are emerging. In this paper, we explain how we altered the open-source RASA natural language understanding pipeline to process incrementally (i.e., word-by-word), following the incremental unit framework proposed by Schlangen and Skantze. To do so, we altered existing RASA components to process incrementally, and added an update-incremental intent recognition model as a component to RASA. Our evaluations on the Snips dataset show that our changes allow RASA to function as an effective incremental natural language understanding service.
{ "paragraphs": [ [ "There is no shortage of services that are marketed as natural language understanding (nlu) solutions for use in chatbots, digital personal assistants, or spoken dialogue systems (sds). Recently, Braun2017 systematically evaluated several such services, including Microsoft LUIS, IBM Watson Conversation, API.ai, wit.ai, Amazon Lex, and RASA BIBREF0 . More recently, Liu2019b evaluated LUIS, Watson, RASA, and DialogFlow using some established benchmarks. Some nlu services work better than others in certain tasks and domains with a perhaps surprising pattern: RASA, the only fully open-source nlu service among those evaluated, consistently performs on par with the commercial services.", "Though these services yield state-of-the-art performance on a handful of nlu tasks, one drawback to sds and robotics researchers is the fact that all of these nlu solutions process input at the utterance level; none of them process incrementally at the word-level. Yet, research has shown that humans comprehend utterances as they unfold BIBREF1 . Moreover, when a listener feels they are missing some crucial information mid-utterance, they can interject with a clarification request, so as to ensure they and the speaker are maintaining common ground BIBREF2 . Users who interact with sdss perceive incremental systems as being more natural than traditional, turn-based systems BIBREF3 , BIBREF4 , BIBREF5 , offer a more human-like experience BIBREF6 and are more satisfying to interact with than non-incremental systems BIBREF7 . Users even prefer interacting with an incremental sds when the system is less accurate or requires filled pauses while replying BIBREF8 or operates in a limited domain as long as there is incremental feedback BIBREF9 .", "In this paper, we report our recent efforts in making the RASA nlu pipeline process incrementally. We explain briefly the RASA framework and pipeline, explain how we altered the RASA framework and individual components (including a new component which we added) to allow it to process incrementally, then we explain how we evaluated the system to ensure that RASA works as intended and how researchers can leverage this tool." ], [ "RASA consists of nlu and core modules, the latter of which is akin to a dialogue manager; our focus here is on the nlu. The nlu itself is further modularized as pipelines which define how user utterances are processed, for example an utterance can pass through a tokenizer, named entity recognizer, then an intent classifier before producing a distribution over possible dialogue acts or intents. The pipeline and the training data are authorable (following a markdown representation; json format can also be used for the training data) allowing users to easily setup and run experiments in any domain as a standalone nlu component or as a module in a sds or chatbot. Importantly, RASA has provisions for authoring new components as well as altering existing ones.", "Figure FIGREF7 shows a schematic of a pipeline for three components. The context (i.e., training data) is passed to Component A which performs its training, then persists a trained model for that component. Then the data is passed through Component A as input for Component B which also trains and persists, and so on for Component C. During runtime, the persisted models are loaded into memory and together form the nlu module." 
], [ "Our approach to making RASA incremental follows the incremental unit (iu) framework Schlangen2011 as has been done in previous work for dialogue processing toolkits BIBREF10 . We treat each module in RASA as an iu processing module and specifically make use of the ADD and REVOKE iu operations; for example, ADD when a new word is typed or recognized by a speech recognizer, and REVOKE if that word is identified as having been erroneously recognized in light of new information.", "By default, RASA components expect full utterances, not single words. In addition to the challenge of making components in the nlu pipeline process word-by-word, we encounter another important problem: there is no ready-made signal for the end of an utterance. To solve this, we added functionality to signal the end of an utterance; this signal can be triggered by any component, including the speech recognizer where it has traditionally originated via endpointing. With this flexibility, any component (or set of components) can make a more informed decision about when an utterance is complete (e.g., if a user is uttering installments, endpointing may occur, but the intent behind the user's installments is not yet complete; the decision as to when an utterance is complete can be made by the nlu or dialogue manager).", "Training RASA nlu proceeds as explained above (i.e., non-incrementally). For runtime, processing incrementally through the RASA pipeline is challenging because each component must have provisions for handling word-level input and must be able to handle ADD and REVOKE iu operations. Each component in a pipeline, for example, as depicted in Figure FIGREF7 , must operate in lock-step with each other where a word is ADDed to Component A which beings processing immediately, then ADDs its processing result to Component B, then Component B processes and passes output to Component C all before the next word is produced for Component A." ], [ "We now explain how we altered specific RASA components to make them work incrementally.", "The Message class in RASA nlu is the main message bus between components in the pipeline. Message follows a blackboard approach to passing information between components. For example, in a pipeline containing a tokenizer, intent classifier, and entity extractor, each of the components would store the tokens, intent class, and entities in the Message object, respectively. Our modifications to Message were minimal; we simply used it to store ius and corresponding edit types (i.e., ADD or REVOKE).", "In order to incrementalize RASA nlu, we extended the base Component to make an addition of a new component, IncrementalComponent. A user who defines their own IncrementalComponent understands the difference in functionality, notably in the parse method. At runtime, a non-incremental component expects a full utterance, whereas an incremental one expects only a single iu. Because non-incremental components expect the entire utterance, they have no need to save any internal state across process calls, and can clear any internal data at the end of the method. However, with incremental components, that workflow changes; each call to process must maintain its internal state, so that it can be updated as it receives new ius. Moreover, IncrementalComponents additionally have a new_utterance method. 
In non-incremental systems, the call to process implicitly signals that the utterance has been completed, and there is no need to store internal data across process calls, whereas incremental systems lose that signal as a result. The new_utterance method acts as that signal.", "The Interpreter class in RASA nlu is the main interface between user input (e.g., asr) and the series of components in the pipeline. On training, the Interpreter prepares the training data, and serially calls train on each of the components in the pipeline. Similarly, to process input, one uses the Interpreter's parse method, where the Interpreter prepares the input (i.e., the ongoing utterance) and serially calls process on the components in the pipeline (analogous to left buffer updates in the iu framework). As a result of its design, we were able to leverage the Interpreter class for incremental processing, notably because of its use of a persistent Message object as a bus of communication between Components.", "As with our implementation of the IncrementalComponent class, we created the IncrementalInterpreter. The IncrementalInterpreter class adds two new methods:", "new_utterance", "parse_incremental", "The new_utterance method is fairly straightforward; it clears RASA nlu's internal Message object that is shared between components, and calls the new_utterance method of each IncrementalComponent in the pipeline, signaling that the utterance has been completed and that each component should clear its internal state. The parse_incremental method takes the iu from the calling input (e.g., asr), and appends it to a list of previous ius being stored in the Message object. After the iu has been added to the Message, the IncrementalInterpreter calls each component's process method, where they can operate on the newest iu. This was intentionally designed to be generalizable, so that future incremental components can use different formats or edit types for their respective iu framework implementation." ], [ "With the incremental framework in place, we further developed a sample incremental component to test the functionality of our changes. For this, we used the Simple Incremental Update Model (sium) described in BIBREF11 . This model is a generative factored joint distribution, which uses a simple Bayesian update as new words are added. At each iu, a distribution over intents and entities is generated with confidence scores, and the intent can be classified at each step as the output with the highest confidence value. Entities, on the other hand, can be extracted if their confidence exceeds a predetermined threshold.", "Following khouzaimi-laroche-lefevre:2014:W14-43, we incrementalized RASA's existing Tensorflow Embedding component for intent recognition as an incremental component. The pipeline consists of a whitespace tokenizer, a scikit-learn Conditional Random Field (crf) entity extractor, a Bag-of-Words featurizer, and lastly, a TensorFlow neural network for intent classification. To begin incrementalizing, we modified the whitespace tokenizer to work on word-level increments, rather than the entire utterance. We modified the crf entity extractor to update the entities recognized up to that point in the utterance with each process call, and we then modified the Bag-of-Words featurizer to update its embeddings with each process call by vectorizing the individual word in the iu and summing that vector with the existing embeddings.
At each word iu increment, we treat the entire utterance prefix up to that point as a full utterance and pass it as input to the Tensorflow Embedding component, which returns a distribution over intents. This process is repeated until all words in the utterance have been added to the prefix. In this way, the component differs from sium in that it doesn't update its internal state; rather, it treats each prefix as a full utterance (i.e., so-called restart-incrementality)." ], [ "In this section, we explain a simple experiment we conducted to evaluate our work in incrementalizing RASA by using the update-incremental sium and restart-incremental tensorflow-embedding modules in a known nlu task." ], [ "To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our training data consisted of 700 utterances, across 7 different intents (AddToPlaylist, BookRestaurant, GetWeather, PlayMusic, RateBook, SearchCreativeWork, and SearchScreeningEvent). In order to test our implementation of incremental components, we initially benchmarked their non-incremental counterparts, and used those results as a baseline for the incremental versions (to treat the sium component as non-incremental, we simply applied all words in each utterance to it and obtained the distribution over intents after each full utterance had been processed).", "We use accuracy of intent and entity recognition as our task and metric. To verify that the components worked as intended, we then used the IncrementalInterpreter to parse the messages as individual ius. To ensure REVOKE worked as intended, we injected random incorrect words at a rate of 40%, followed by subsequent revokes, ensuring that an ADD followed by a REVOKE resulted in the same output as if the incorrect word had never been added. While we implemented both an update-incremental and a restart-incremental RASA nlu component, the results of the two cannot be directly compared for accuracy as the underlying models differ greatly (i.e., sium is generative, whereas Tensorflow Embedding is a discriminative neural network; moreover, sium was designed to work as a reference resolution component for physical objects, not abstract intents), nor are these results conducive to an argument of update- vs. restart-incremental approaches, as the underlying architectures of the models vary greatly.", "The results of our evaluation can be found in Table TABREF14 . These results show that our incremental implementation works as intended, as the incremental and non-incremental versions of each component yielded the same results. While there is a small variation in the F1 scores between the non-incremental and incremental components, 1% is well within a reasonable tolerance as there is some randomness in training the underlying model.", "RASA nlu is a useful and well-evaluated toolkit for developing nlu components in sds and chatbot systems. We extended RASA by adding provisions for incremental processing generally, and we implemented two components for intent recognition that used update- and restart-incremental approaches. Our results show that the incrementalization worked as expected. For ongoing and future work, we plan to develop an update-incremental counterpart to the Tensorflow Embeddings component that uses a recurrent neural network to maintain the state. We will further evaluate our work with incremental asr in live dialogue tasks. We will make our code available upon acceptance of this publication."
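To make the intended runtime usage concrete, a minimal sketch of feeding word-level ius through the modified pipeline is given below. The class and method names follow the description above (IncrementalInterpreter, parse_incremental, new_utterance), but the exact signatures and the iu representation are assumptions for illustration, not the released implementation.

```python
# Hypothetical usage sketch: word-by-word parsing with the incremental interpreter.
# IncrementalInterpreter is the class described in this paper and is not a
# published RASA API; its constructor and return values are assumed here.
interpreter = IncrementalInterpreter(pipeline_components)

def process_utterance(words):
    interpreter.new_utterance()            # reset shared Message and component states
    result = None
    for word in words:
        iu = (word, "add")                 # an ADD incremental unit
        result = interpreter.parse_incremental(iu)
        # result holds the current intent distribution / entities for this prefix
    return result

# A misrecognized word could be undone with a REVOKE iu, e.g.:
# interpreter.parse_incremental(("misrecognized_word", "revoke"))
```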
], [ "language: \"en\"", "pipeline:", "- name: \"intent_featurizer_count_vectors\"", "- name: \"intent_..._tensorflow_embedding\"", " intent_tokenization_flag: true", " intent_split_symbol: \"+\"", "" ] ], "section_name": [ "Introduction", "The RASA NLU Pipeline", "Incrementalizing RASA", "Incrementalizing RASA Components", "Incremental Intent Recognizer Components", "Experiment", "Data, Task, Metrics", "Results", "Conclusion", "Appendix" ] }
{ "answers": [ { "annotation_id": [ "2f6dbdd7c8cb2cd735a26a2b03eb344ab650cdb9" ], "answer": [ { "evidence": [ "To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our training data consisted of 700 utterances, across 7 different intents (AddToPlaylist, BookRestaurant, GetWeather, PlayMusic, RateBook, SearchCreativeWork, and SearchScreeningEvent). In order to test our implementation of incremental components, we initially benchmarked their non-incremental counterparts, and used that as a baseline for the incremental versions (to treat the sium component as non-incremental, we simply applied all words in each utterance to it and obtained the distribution over intents after each full utterance had been processed).", "We use accuracy of intent and entity recognition as our task and metric. To evaluate the components worked as intended, we then used the IncrementalInterpreter to parse the messages as individual ius. To ensure REVOKE worked as intended, we injected random incorrect words at a rate of 40%, followed by subsequent revokes, ensuring that an ADD followed by a revoke resulted in the same output as if the incorrect word had never been added. While we implemented both an update-incremental and a restart-incremental RASA nlu component, the results of the two cannot be directly compared for accuracy as the underlying models differ greatly (i.e., sium is generative, whereas Tensorflow Embedding is a discriminative neural network; moreover, sium was designed to work as a reference resolution component to physical objects, not abstract intents), nor are these results conducive to an argument of update- vs. restart-incremental approaches, as the underlying architecture of the models vary greatly." ], "extractive_spans": [], "free_form_answer": "The changes are evaluated based on accuracy of intent and entity recognition on SNIPS dataset", "highlighted_evidence": [ "To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. ", "We use accuracy of intent and entity recognition as our task and metric. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "" ], "paper_read": [ "" ], "question": [ "How are their changes evaluated?" ], "question_id": [ "14eb2b89ba39e56c52954058b6b799a49d1b74bf" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "" ], "topic_background": [ "" ] }
{ "caption": [ "Figure 1: The lifecycle of RASA components (from https://rasa.com/docs/nlu/)", "Table 1: Test data results of non-incremental TensorFlow, restart-incremental TensorFlow, non-incremental SIUM, and update-incremental SIUM." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png" ] }
[ "How are their changes evaluated?" ]
[ [ "1907.05403-Data, Task, Metrics-0", "1907.05403-Data, Task, Metrics-1" ] ]
[ "The changes are evaluated based on accuracy of intent and entity recognition on SNIPS dataset" ]
420
1707.06945
Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation
Existing approaches to automatic VerbNet-style verb classification are heavily dependent on feature engineering and therefore limited to languages with mature NLP pipelines. In this work, we propose a novel cross-lingual transfer method for inducing VerbNets for multiple languages. To the best of our knowledge, this is the first study which demonstrates how the architectures for learning word embeddings can be applied to this challenging syntactic-semantic task. Our method uses cross-lingual translation pairs to tie each of the six target languages into a bilingual vector space with English, jointly specialising the representations to encode the relational information from English VerbNet. A standard clustering algorithm is then run on top of the VerbNet-specialised representations, using vector dimensions as features for learning verb classes. Our results show that the proposed cross-lingual transfer approach sets new state-of-the-art verb classification performance across all six target languages explored in this work.
{ "paragraphs": [ [ "Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .", "Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks.", "This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages.", "It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages.", "Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work.", "There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . 
This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper:", "(Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages?", "(Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages?", "(Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language?", "To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3.", "The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 .", "Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. 
For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese." ], [ "Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let INLINEFORM0 be the vocabulary consisting of the source language and target language vocabularies INLINEFORM1 and INLINEFORM2 , respectively. Let INLINEFORM3 be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)). Given the initial distributional space and collections of such Attract pairs INLINEFORM4 , the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches of size INLINEFORM5 . The method's cost function can be expressed as:", " DISPLAYFORM0 ", "The first term of the method's cost function (i.e., INLINEFORM0 ) pulls the Attract examples INLINEFORM1 closer together (see Fig. FIGREF2 for an illustration). INLINEFORM2 refers to the current mini-batch of Attract constraints. This term is expressed as follows:", " DISPLAYFORM0 ", " INLINEFORM0 is the standard rectified linear unit or the hinge loss function BIBREF50 , BIBREF51 . INLINEFORM1 is the “attract” margin: it determines how much vectors of words from Attract constraints should be closer to each other than to their negative examples. The negative example INLINEFORM2 for each word INLINEFORM3 in any Attract pair is always the vector closest to INLINEFORM4 taken from the pairs in the current mini-batch, distinct from the other word paired with INLINEFORM5 , and INLINEFORM6 itself.", "The second INLINEFORM0 term is the regularisation which aims to retain the semantic information encoded in the initial distributional space as long as this information does not contradict the used Attract constraints. Let INLINEFORM1 refer to the initial distributional vector of the word INLINEFORM2 and let INLINEFORM3 be the set of all word vectors present in the given mini-batch. If INLINEFORM4 denotes the L2 regularisation constant, this term can be expressed as:", " DISPLAYFORM0 ", "", "The fine-tuning procedure effectively blends the knowledge from external resources (i.e., the input Attract set of constraints) with distributional information extracted directly from large corpora. We show how to propagate annotations from a knowledge source such as VerbNet from source to target by combining two types of constraints within the specialisation framework: a) cross-lingual (translation) links between languages, and b) available VerbNet annotations in a resource-rich language transformed into pairwise constraints. Cross-lingual constraints such as (pl_wojna, it_guerra) are extracted from BabelNet BIBREF36 , a large-scale resource which groups words into cross-lingual babel synsets (and is currently available for 271 languages). 
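To make the Attract cost described above concrete, the sketch below computes, for one mini-batch of Attract pairs, the hinge-loss term with in-batch negative examples together with the L2 pull toward the initial distributional vectors. It is an illustrative sketch rather than the authors' implementation: the margin and regularisation constant are assumed values, raw dot products stand in for the similarity, and mini-batch handling is simplified.

```python
# Illustrative sketch of the Attract term plus regularisation for one mini-batch.
# Margin, regularisation constant, and dot-product similarity are assumptions,
# not the paper's settings.
import torch

def attract_batch_cost(vecs, init_vecs, pairs, margin=0.6, reg=1e-9):
    """vecs: (V, d) trainable word vectors; init_vecs: frozen copy of the initial
    distributional vectors; pairs: LongTensor (B, 2) of Attract constraint indices."""
    x_l, x_r = vecs[pairs[:, 0]], vecs[pairs[:, 1]]          # (B, d) each
    batch = torch.cat([x_l, x_r], dim=0)                     # every word vector in the batch
    B = pairs.size(0)
    idx = torch.arange(B)

    def nearest_confounder(x, self_col, partner_col):
        sims = x @ batch.t()                                  # (B, 2B) similarities
        mask = torch.zeros_like(sims, dtype=torch.bool)
        mask[idx, self_col] = True                            # exclude the word itself
        mask[idx, partner_col] = True                         # ... and its Attract partner
        return sims.masked_fill(mask, float('-inf')).max(dim=1).values

    pos = (x_l * x_r).sum(dim=1)                              # similarity of each Attract pair
    neg_l = nearest_confounder(x_l, idx, idx + B)             # negative example for the left word
    neg_r = nearest_confounder(x_r, idx + B, idx)             # negative example for the right word
    attract = torch.relu(margin + neg_l - pos).sum() + torch.relu(margin + neg_r - pos).sum()

    # Regularisation: keep the words touched by this batch close to their initial vectors.
    touched = torch.unique(pairs)
    reg_term = reg * ((vecs[touched] - init_vecs[touched]) ** 2).sum()
    return attract + reg_term
```

In a full specialisation run, vecs would be a torch.nn.Parameter initialised from the distributional vectors and updated by an optimiser over successive mini-batches drawn from the Attract constraint set.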
The wide and steadily growing coverage of languages in BabelNet means that our proposed framework promises to support the transfer of VerbNet-style information to numerous target languages (with increasingly high accuracy).", "To establish that the proposed transfer approach is in fact independent of the chosen cross-lingual information source, we also experiment with another cross-lingual dictionary: PanLex BIBREF38 , which was used in prior work on cross-lingual word vector spaces BIBREF52 , BIBREF53 . This dictionary currently covers around 1,300 language varieties with over 12 million expressions, thus offering support also for low-resource transfer settings.", "VerbNet constraints are extracted from the English VerbNet class structure in a straightforward manner. For each class INLINEFORM0 from the 273 VerbNet classes, we simply take the set of all INLINEFORM1 verbs INLINEFORM2 associated with that class, including its subclasses, and generate all unique pairs INLINEFORM3 so that INLINEFORM4 and INLINEFORM5 . Example VerbNet pairwise constraints are shown in Tab. TABREF15 . Note that VerbNet classes in practice contain verb instances standing in a variety of lexical relations, including synonyms, antonyms, troponyms, hypernyms, and the class membership is determined on the basis of connections between the syntactic patterns and the underlying semantic relations BIBREF54 , BIBREF55 ." ], [ "Given the initial distributional or specialised collection of target language vectors INLINEFORM0 , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters INLINEFORM1 using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details." ], [ "Cross-Lingual Transfer Model F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. 
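The pairwise VerbNet constraint generation described above, taking all unique verb pairs within a class with subclass members merged in, reduces to a few lines. In the toy class below, transport and transfer come from the send-11.1 example earlier in this record, while the third member is added purely for illustration.

```python
from itertools import combinations

def verbnet_attract_pairs(verb_classes, lang_prefix="en_"):
    """verb_classes: dict mapping a VerbNet class name to its member verbs
    (subclass members already merged in). Returns unique pairwise Attract constraints."""
    pairs = set()
    for verbs in verb_classes.values():
        members = sorted({lang_prefix + v for v in verbs})
        # Every unique unordered pair of verbs from the same class becomes one constraint.
        pairs.update(combinations(members, 2))
    return sorted(pairs)

# Toy class: transport/transfer follow the send-11.1 example; "ship" is illustrative only.
print(verbnet_attract_pairs({"send-11.1": ["transport", "transfer", "ship"]}))
# [('en_ship', 'en_transfer'), ('en_ship', 'en_transport'), ('en_transfer', 'en_transport')]
```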
However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.", "The absolute classification scores are the lowest for the two Slavic languages: pl and hr. This may be partially explained by the lowest number of cross-lingual constraints for the two languages covering only a subset of their entire vocabularies (see Tab. TABREF18 and compare the total number of constraints for hr and pl to the numbers for e.g. fi or fr). Another reason for weaker performance of these two languages could be their rich morphology, which induces data sparsity both in the initial vector space estimation and in the coverage of constraints." ], [ "This work has proven the potential of transferring lexical resources from resource-rich to resource-poor languages using general-purpose cross-lingual dictionaries and bilingual vector spaces as means of transfer within a semantic specialisation framework. However, we believe that the proposed basic framework may be upgraded and extended across several research paths in future work.", "First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently BIBREF79 , BIBREF80 , the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the Attract-Repel specialisation framework for sense-aware cross-lingual transfer relying on recently developed multi-sense/prototype word representations BIBREF81 , BIBREF82 .", "Another challenge is to apply the idea from this work to enable cross-lingual transfer of other structured lexical resources available in English such as FrameNet BIBREF44 , PropBank BIBREF45 , and VerbKB BIBREF83 . Other potential research avenues include porting the approach to other typologically diverse languages and truly low-resource settings (e.g., with only limited amounts of parallel data), as well as experiments with other distributional spaces, e.g. BIBREF84 . Further refinements of the specialisation and clustering algorithms may also result in improved verb class induction." ], [ "We have presented a novel cross-lingual transfer model which enables the automatic induction of VerbNet-style verb classifications across multiple languages. The transfer is based on a word vector space specialisation framework, utilised to directly model the assumption of cross-linguistic validity of VerbNet-style classifications. Our results indicate strong improvements in verb classification accuracy across all six target languages explored. All automatically induced VerbNets are available at:", "github.com/cambridgeltl/verbnets." ], [ "This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to the entire LEXICAL team, especially to Roi Reichart, and also to the three anonymous reviewers for their helpful and constructive suggestions." ] ], "section_name": [ "Introduction", "Vector Space Specialisation", "Clustering Algorithm", "Results and Discussion", "Further Discussion and Future Work", "Conclusion", "Acknowledgments" ] }
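Relating to the 'Clustering Algorithm' section listed above: the paper groups (specialised) verb vectors with MNCut spectral clustering and estimates the number of clusters with the self-tuning method of Zelnik-Manor and Perona. The sketch below is a simplified stand-in that uses scikit-learn's spectral clustering on a cosine-similarity affinity with a fixed number of clusters; the fixed k and the clipping of negative similarities are assumptions made only for illustration.

```python
# Simplified stand-in for the clustering step: fixed number of clusters and
# scikit-learn's spectral clustering instead of MNCut with self-tuned k.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def cluster_verbs(verbs, vectors, n_clusters=10):
    """verbs: list of verb strings; vectors: (len(verbs), d) array of their
    (specialised) embeddings. Returns a dict mapping cluster id to member verbs."""
    affinity = np.clip(cosine_similarity(vectors), 0.0, None)   # non-negative affinities
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    clusters = {}
    for verb, label in zip(verbs, labels):
        clusters.setdefault(int(label), []).append(verb)
    return clusters
```

With the self-tuning method used in the paper, n_clusters would instead be derived from the eigenvector structure of the similarity matrix.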
{ "answers": [ { "annotation_id": [ "8d8ac5e2871d148fadde8640840d62386779dea8" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "851f3497ec7565c6c76be5fd6f4b51b25c6a9eeb" ], "answer": [ { "evidence": [ "Given the initial distributional or specialised collection of target language vectors INLINEFORM0 , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters INLINEFORM1 using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details." ], "extractive_spans": [ "MNCut spectral clustering algorithm BIBREF58" ], "free_form_answer": "", "highlighted_evidence": [ "Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "40e021e9e2f011e83fe41240f3486e34e08eb325" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2fb4949f431cba608c1f821387fb78a6a8ec99f8" ], "answer": [ { "evidence": [ "Given the initial distributional or specialised collection of target language vectors INLINEFORM0 , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters INLINEFORM1 using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.", "Results and Discussion" ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI).", "highlighted_evidence": [ "This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. 
We refer the reader to the relevant literature for further details.\n\nResults and Discussion" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What baseline is used for the verb classification experiments?", "What clustering algorithm is used on top of the VerbNet-specialized representations?", "How many words are translated between the cross-lingual translation pairs?", "What are the six target languages?" ], "question_id": [ "83f24e4bbf9de82d560cdde64b91d6d672def6bf", "6b8a3100895f2192e08973006474428319dc298e", "daf624f7d1623ccd3facb1d93d4d9d616b3192f4", "74261f410882551491657d76db1f0f2798ac680f" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "irony", "irony", "irony", "irony" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Transferring VerbNet information from a resource-rich to a resource-lean language through a word vector space: an English→ French toy example. Representations of words described by two types of ATTRACT constraints are pulled closer together in the joint vector space. (1) Monolingual pairwise constraints in English (e.g., (en_ruin, en_shatter), (en_destroy, en_undo)) reflect the EN VerbNet structure and are generated from the readily available verb classification in English (solid lines). They are used to specialise the distributional EN vector subspace for the VerbNet relation. (2) Cross-lingual English-French pairwise constraints (extracted from BabelNet) describe cross-lingual synonyms (i.e., translation links) such as (en_ruin, fr_ruiner) or (en_shatter, fr_fracasser) (dashed lines). The post-processing fine-tuning specialisation procedure based on (1) and (2) effectively transforms the initial distributional French vector subspace to also emphasise the VerbNet-style structure, facilitating the induction of verb classes in French.", "Table 1: Example pairwise ATTRACT constraints extracted from three VerbNet classes in English.", "Table 2: Statistics of the experimental setup for each target language: training/test data and constraints. Coverage refers to the percentage of test verbs represented in the target language vocabularies.", "Figure 2: F-1 scores in six target languages using the post-processing specialisation procedure from Sect. 2.1 and different sets of constraints: Distributional refers to the initial vector space in each target language; Mono-Syn is the vector space tuned using monolingual synonymy constraints from BabelNet; XLing uses cross-lingual EN-TARGET constraints from BabelNet (TARGET refers to any of the six target languages); XLing+VerbNet-EN is a fine-tuned vector space which uses both cross-lingual EN-TARGET constraints plus EN VerbNet constraints. Results are provided with (a) SGNS-BOW2 and (b) SGNS-DEPS source vector space in English for the XLing and XLing+VerbNet variants, see Sect. 3.", "Table 3: The effect of multilingual vector space specialisation. Results are reported for FR and IT using: a) cross-lingual constraints only (XLing); and b) the VerbNet transfer model (XLing+VN).", "Table 4: Comparison of verb classification (VC) and verb semantic similarity (Sim) for English. VC is measured on the EN test set of Sun et al. (2008). Sim is measured on SimVerb-3500 (Gerz et al., 2016). The scores are Spearman’s ρ correlation scores. EN-Dist is the initial distributional English vector space: SGNS-BOW2; EN-VN is the same space transformed using monolingual EN VerbNet constraints only, an upper bound for the specialisation-based approach in EN.", "Figure 3: F-1 scores when PanLex is used as the source of cross-lingual ATTRACT constraints (instead of BabelNet). EN Vectors: SGNS-BOW2." ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Figure2-1.png", "8-Table3-1.png", "8-Table4-1.png", "9-Figure3-1.png" ] }
[ "What are the six target languages?" ]
[ [ "1707.06945-Clustering Algorithm-0" ] ]
[ "Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI)." ]
421
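Each record in this file bundles the paper text (full_text, with paragraphs grouped by section_name), the QA annotations under qas, and the distilled question and answer_gt lists. The sketch below shows one way such a record might be consumed once it has been parsed into a Python dict with these field names; the parsing step itself is not shown and the handling of empty answers is a simplifying assumption.

```python
def iter_gold_answers(record):
    """Yield (question, answer, evidence) triples from one record of this file,
    assuming the record has already been parsed into a dict with the field names above."""
    qas = record["qas"]
    for question, answer_block in zip(qas["question"], qas["answers"]):
        for ann in answer_block["answer"]:
            if ann["unanswerable"]:
                yield question, None, []
                continue
            # Prefer the free-form answer when present, otherwise join the extracted spans.
            answer = ann["free_form_answer"] or "; ".join(ann["extractive_spans"])
            yield question, answer, ann["evidence"]

def sections_as_text(record):
    """Rebuild plain text per section from the parallel paragraph / section-name lists."""
    full_text = record["full_text"]
    return {name: "\n".join(paragraphs)
            for name, paragraphs in zip(full_text["section_name"], full_text["paragraphs"])}
```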
1802.05574
Open Information Extraction on Scientific Text: An Evaluation
Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering. While OIE methods are targeted at being domain independent, they have been evaluated primarily on newspaper, encyclopedic or general web text. In this article, we evaluate the performance of OIE on scientific texts originating from 10 different disciplines. To do so, we use two state-of-the-art OIE systems and judge their extractions with a crowd-sourcing approach. We find that OIE systems perform significantly worse on scientific text than on encyclopedic text. We also provide an error analysis and suggest areas of work to reduce errors. Our corpus of sentences and judgments is made available.
{ "paragraphs": [ [ " This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .", "While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.", "Specifically, we aim to test two hypotheses:", "Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.", "The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude." ], [ "OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note, that throughout this paper we use the word `triple” to refer interchangeably to binary relations.) Existing evaluation approaches for OIE systems have primarily taken a ground truth-based approach. Human annotators analyze sentences and determine correct relations to be extracted. Systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.", "This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. 
One possible extraction is:", "(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)", "Another possible extraction is:", "(The patient :: was treated with :: Emtricitabine)", "(The patient :: was treated with :: Etravirine)", "(The patient :: was treated with :: Darunavir)", "Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.", "From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:", "In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.", "As previously noted existing evaluations have also only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well, however, as far as we know, none have specifically looked at evaluation for scientific and medical text." ], [ "We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.", "We note that both OpenIE 4 and MiniIE support relation extractions that go beyond binary tuples, supporting the extraction of n-ary relations. We note that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.", "In terms of settings, we used the off the shelf settings for OpenIE 4. For MinIE, we used their “safe mode\" option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MiniIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available." ], [ "We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.", "The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published in disciplines. 
It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.", "We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence." ], [ "We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is there were some sentences in which no triples were extracted. We discuss later the sentences in which no triples were extracted. In total 2247 triples were extracted.", "The sentences and their corresponding triples were then divided. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE systems. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITS can be found with the dataset. All triples were labelled by at least 5 workers.", "Note, to ensure the every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITS.", "We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loaded them the Mechanical Turk development environment called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete the HITs.", "Having an internal team of workers attempt HITs provides us with two valuable aspects of the eventual production HITs. First, internal users are able to provide feedback related to usability and clarity of the task. They were asked to read the instructions and let us know if there was anything that was unclear. After taking the HITs, they are able to ask questions about anomalies or confusing situations they encounter and allow us to determine if specific types of HITs are either not appropriate for the task or might need further explanation in the instructions. In addition to the internal users direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This would come into factor how we decided on how much to pay each Worker per HIT after they were made available to the public.", "The second significant outcome from the internal annotations was the generation of a set of `expected' correct triples. Having a this set of annotations is an integral part of two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. 
A qualification HIT is a HIT that is made available to the public with the understanding the Workers will be evaluated based on how closely they matched the annotations of the internal annotators. Based upon this, the Workers with the most matches would be invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor quality of the individual Workers over the course of the project. Note, none of this data was used in the actual evaluation. It was only for the purposes of qualifying Workers.", "We are sensitive to issues that other researchers have in regards to Mechanical Turk Workers earning fair payment in exchange for their contributions to the HITs BIBREF12 . We used the time estimates from our internal annotation to price the task in order to be above US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average it took 30 minutes for a worker to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for long response times and consecutive submitted results. We saw no indicators of low quality responses." ], [ "In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha inter-annotator agreement was 0.44. This calculation was performed over all data and annotators as Krippendorff's alpha is designed to account for missing data and work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss's Kappa was 0.41 and the average of Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected as the task itself can be difficult and requires judgement from the annotators at the margin.", "Table 1 shows examples of triples that were associated with higher disagreement between annotators. One can see for example, in the third example, that annotators might be confused by the use of a pronoun (him). Another example is in the last sentence of the table, where one can see that there might be disagreement on whether the subsequent prepositional phrase behind light microscope analysis should be included as part of the extracted triple.", "We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to make assessments as to whether a triple correctly reflects the content from which it is extracted, we rely on the unanimous positive agreement between crowd workers. That is to say that if we have 100% inter-annotator agreement that a triple was correctly extracted we label it as correct." ], [ "Table 2 show the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as being correct by all annotators. 
Total Triples are the total number of triples extracted by the given systems over the specified data. Precision is calculated as typical where Correct Triples are treated as true positives. On average, 3.1 triples were extracted per sentence.", "Figure 1 shows the performance of extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with 5 judgements this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.", "We see that as expected the amount of triples agreed to as correct grows larger as we relax the requirement for agreement. For example, analyzing Open IE's results, at the 100% agreement level we see a precision of 0.56 whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels." ], [ "From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than scientific text (0.34 P). Additionally, we see that there is higher annotator agreement on whether triples extracted from Wikipedia and scientific text are correct or incorrect: 0.80 - SD 0.24 (WIKI) vs. 0.72 - SD 0.25 (SCI). A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 - SD 0.21 (WIKI) vs. 0.78 - SD 0.25 (SCI) . In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for wikipedia extractions over those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors are similar across data sources." ], [ "We also compare the output of the two extractors. In terms precision, Open IE 4 performs much better across the two datasets (0.56P vs 0.39P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement 0.87 - SD 0.22 (Open IE) vs 0.81 - SD 0.24 (MinIE). Focusing on scientific and medical text (SCI), again where the triples are majority annotated as being correct, Open IE has higher inter-annotator agreement (Open IE: 0.83 - SD 0.24 vs MiniIE: 0.76 - SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE produces triples that annotators are more likely to agree as being correct.", "MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall numbers of the two systems can not be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.", "A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words) vs. 8.5 words for MinIE across all argument positions. 
However, Open IE 4 generates shorter relation types than MinIE (Open IE - 3.7 words; MiniIE 6.27 words) and the standard deviation in terms of word length is much more compact for Open IE 4 - 1 word vs 3 words for MinIE. Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis. " ], [ "The amount of triples extracted from the scientific text is slightly larger than that extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than encyclopedic text.", "The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult." ], [ "We now look more closely at the various errors that were generated by the two extractors.", "Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first are phrases that are incomplete sentences usually originating from headings (e.g. Materials and methods). The next group are descriptive headings potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective of the sentences used for random selection. Additionally, these systems could look at potentially just extracting noun phrases with variable relation types, hence, expressing a cooccurrence relation.", "We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.", "We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:" ], [ "The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.", "There are a range of avenues for future work. First, the application of Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. 
Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger numbers of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.", "To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text." ] ], "section_name": [ "Introduction", "Existing Evaluation Approaches", "Systems", "Datasets", "Annotation Process", "Judgement Data and Inter-Annotator Agreement", "Experimental Results", "Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text", "Testing H2: Comparing the Performance of Systems ", "Other Observations", "Error Analysis", "Conclusion" ] }
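The judgement analysis described in this record labels a triple as correct only under unanimous positive agreement, reports precision at relaxed agreement levels, and compares the WIKI and SCI subsets with Welch's t-test. The sketch below illustrates those computations on an invented toy judgement matrix; the published analysis also reports Krippendorff's alpha, Fleiss' kappa, and Scott's pi, which are not reproduced here.

```python
# Illustrative sketch of the judgement analysis; the judgement matrix below is an
# invented placeholder (rows = extracted triples, columns = 5 annotators, 1 = correct).
import numpy as np
from scipy.stats import ttest_ind

judgements = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 0, 1],
])
source = np.array(["WIKI", "WIKI", "SCI", "SCI", "WIKI", "SCI"])  # corpus of each triple

votes = judgements.sum(axis=1)
n_annotators = judgements.shape[1]

# Strict rule used for the headline precision: a triple counts as correct only if every
# annotator marked it correct; precision = correct triples / total extracted triples.
print("strict precision:", np.mean(votes == n_annotators))

# Precision at relaxed agreement levels (e.g. at least 3 of 5 positive judgements).
for k in range(n_annotators, 2, -1):
    print(f">= {k}/{n_annotators} positive judgements: precision {np.mean(votes >= k):.2f}")

# Per-triple proportion of agreeing annotator pairs, compared across corpora with
# Welch's t-test (unequal variances).
pairs_agree = (votes * (votes - 1) + (n_annotators - votes) * (n_annotators - votes - 1)) / (
    n_annotators * (n_annotators - 1))
t, p = ttest_ind(pairs_agree[source == "WIKI"], pairs_agree[source == "SCI"], equal_var=False)
print(f"Welch's t-test on agreement: t = {t:.2f}, p = {p:.3f}")
```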
{ "answers": [ { "annotation_id": [ "3003cf1d33c198e21640ab6bc76664481925b8a2" ], "answer": [ { "evidence": [ "We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.", "The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published in disciplines. It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.", "We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is there were some sentences in which no triples were extracted. We discuss later the sentences in which no triples were extracted. In total 2247 triples were extracted.", "In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha inter-annotator agreement was 0.44. This calculation was performed over all data and annotators as Krippendorff's alpha is designed to account for missing data and work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss's Kappa was 0.41 and the average of Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected as the task itself can be difficult and requires judgement from the annotators at the margin." ], "extractive_spans": [], "free_form_answer": "440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples.", "highlighted_evidence": [ "he first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . ", "The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus.", "In total 2247 triples were extracted.", "In total, 11262 judgements were obtained after running the annotation process." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "7139ec92a1d9f42bdb0cd1b66e3550da2c4c36a5" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "516d034dc63d0135d0ae3240e95d9a4fe218cb4d" ], "answer": [ { "evidence": [ "We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences." ], "extractive_spans": [ "all annotators that a triple extraction was incorrect" ], "free_form_answer": "", "highlighted_evidence": [ "We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "3bcc7f0e13c7b4794caed7180d8d153bb5b90e14" ], "answer": [ { "evidence": [ "We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore." ], "extractive_spans": [], "free_form_answer": "OpenIE4 and MiniIE", "highlighted_evidence": [ "he first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 .", "The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "e0debcb89938b5d963c74bba115ab1ff7d889862" ], "answer": [ { "evidence": [ "The sentences and their corresponding triples were then divided. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE systems. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITS can be found with the dataset. All triples were labelled by at least 5 workers." ], "extractive_spans": [ "Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence." ], "free_form_answer": "", "highlighted_evidence": [ "Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "What is the size of the released dataset?", "Were the OpenIE systems more accurate on some scientific disciplines than others?", "What is the most common error type?", "Which OpenIE systems were used?", "What is the role of crowd-sourcing?" ], "question_id": [ "63c0128935446e26eacc7418edbd9f50cba74455", "9a94dcee17cdb9a39d39977191e643adece58dfc", "18e915b917c81056ceaaad5d6581781c0168dac9", "9c68d6d5451395199ca08757157fbfea27f00f69", "372fbf2d120ca7a101f70d226057f9639bf1f9f2" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "information extraction", "information extraction", "information extraction", "information extraction", "information extraction" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Examples of difficult to judge triples and their associated sentences.", "Table 2: Results of triples extracted from the SCI and WIKI corpora using the Open IE and MinIE tools.", "Figure 1: Precision at various agreement levels. Agreement levels are shown as the proportion of overall agreement. In addition, we indicate the the minimum number of annotators who considered relations correct out of the total number of annotators.", "Table 3: Correct triples at different levels of agreement subsetted by system and data source . Agreement levels follow from Figure 1", "Table 4: Sentences in which no triples were extracted" ], "file": [ "5-Table1-1.png", "5-Table2-1.png", "6-Figure1-1.png", "6-Table3-1.png", "8-Table4-1.png" ] }
[ "What is the size of the released dataset?", "Which OpenIE systems were used?" ]
[ [ "1802.05574-Datasets-1", "1802.05574-Datasets-0", "1802.05574-Judgement Data and Inter-Annotator Agreement-0", "1802.05574-Annotation Process-0" ], [ "1802.05574-Systems-0" ] ]
[ "440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples.", "OpenIE4 and MiniIE" ]
424
1705.00108
Semi-supervised sequence tagging with bidirectional language models
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
{ "paragraphs": [ [ "Due to their simplicity and efficacy, pre-trained word embedding have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .", "However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .", "Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .", "In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.", "Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more then 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.", "As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers." ], [ "The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3)." ], [ "Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).", "Given a sentence of tokens INLINEFORM0 it first forms a representation, INLINEFORM1 , for each token by concatenating a character based representation INLINEFORM2 with a token embedding INLINEFORM3 : DISPLAYFORM0 ", " The character representation INLINEFORM0 captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by INLINEFORM1 with parameters INLINEFORM2 . 
The token embeddings, INLINEFORM3 , are obtained as a lookup INLINEFORM4 , initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .", "To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, INLINEFORM0 , the hidden state INLINEFORM1 of RNN layer INLINEFORM2 is formed by concatenating the hidden states from the forward ( INLINEFORM3 ) and backward ( INLINEFORM4 ) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token INLINEFORM5 . More formally, for the first RNN layer that operates on INLINEFORM6 to output INLINEFORM7 : DISPLAYFORM0 ", " The second RNN layer is similar and uses INLINEFORM0 to output INLINEFORM1 . In this paper, we use INLINEFORM2 layers of RNNs in all experiments and parameterize INLINEFORM3 as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.", "Finally, the output of the final RNN layer INLINEFORM0 is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 ." ], [ "A language model computes the probability of a token sequence INLINEFORM0 INLINEFORM1 ", "Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history INLINEFORM0 into a fixed dimensional vector INLINEFORM1 . This is the forward LM embedding of the token at position INLINEFORM2 and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token INLINEFORM3 using a softmax layer over words in the vocabulary.", "The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in additional to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with INLINEFORM0 tokens, it computes INLINEFORM1 ", "A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding INLINEFORM0 , for the sequence INLINEFORM1 , the output embeddings of the top layer LSTM.", "In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters." ], [ "Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings INLINEFORM0 with the output from one of the bidirectional RNN layers in the sequence model. 
In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace ( EQREF6 ) with DISPLAYFORM0 ", "There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing ( EQREF9 ) with INLINEFORM0 where INLINEFORM1 is a non-linear function). Another possibility introduces an attention-like mechanism that weights the all LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging so we did not explore these alternatives in this study, preferring to leave them for future work." ], [ "We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0." ], [ "Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).", "In the CoNLL 2003 NER task, our model scores 91.93 mean INLINEFORM0 , which is a statistically significant increase over the previous best result of 91.62 INLINEFORM1 from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test, INLINEFORM2 ).", "In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean INLINEFORM0 , exceeding all previously published results without additional labeled data by more then 1% absolute INLINEFORM1 . The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% ( INLINEFORM2 assuming standard deviation of INLINEFORM3 ).", "Importantly, the LM embeddings amounts to an average absolute improvement of 1.06 and 1.37 INLINEFORM0 in the NER and Chunking tasks, respectively.", "Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger then the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 INLINEFORM0 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags and BIBREF8 reported an increase of 0.71 INLINEFORM1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in INLINEFORM2 when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 ." 
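Before turning to the analysis, a minimal PyTorch-style sketch may help make the combined architecture concrete: pre-trained token embeddings feed a first bidirectional LSTM, the frozen bidirectional LM embeddings are concatenated to its output, and a second bidirectional LSTM feeds a dense layer of per-token tag scores. The character-level encoder and CRF layer are omitted for brevity, and all names and dimensions are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LMAugmentedTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200,
                 lm_dim=1024, num_tags=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # initialized from pre-trained vectors in practice
        self.rnn1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # the second layer consumes the first-layer output plus the LM embedding
        self.rnn2 = nn.LSTM(2 * hidden_dim + lm_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.scores = nn.Linear(2 * hidden_dim, num_tags)  # a CRF would normally sit on top

    def forward(self, token_ids, lm_embeddings):
        x = self.embed(token_ids)                  # (batch, seq_len, emb_dim)
        h1, _ = self.rnn1(x)                       # (batch, seq_len, 2*hidden_dim)
        h1 = torch.cat([h1, lm_embeddings], dim=-1)  # inject the frozen biLM states
        h2, _ = self.rnn2(h1)
        return self.scores(h2)                     # per-token tag scores

# toy forward pass: 2 sentences of 10 tokens, with 1024-d pre-computed LM embeddings
model = LMAugmentedTagger(vocab_size=30000)
tokens = torch.randint(0, 30000, (2, 10))
lm_emb = torch.randn(2, 10, 1024)                  # would come from the pre-trained biLM
print(model(tokens, lm_emb).shape)                 # torch.Size([2, 10, 17])
```

The key design point mirrored here is that the LM parameters are not trained with the tagger; only their output states enter the supervised model.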
], [ "To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.", "In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings INLINEFORM0 to:", "augment the input of the first RNN layer; i.e.,", " INLINEFORM0 ,", "augment the output of the first RNN layer; i.e., INLINEFORM0 , and", "augment the output of the second RNN layer; i.e., INLINEFORM0 .", "Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.", "In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .", "We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with INLINEFORM0 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.", "LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves INLINEFORM0 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.", "To highlight the importance of including language models trained on a large scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models help because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.", "To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 INLINEFORM0 , well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. 
Note that the LM weights are fixed in this experiment.", "A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test INLINEFORM0 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% INLINEFORM1 for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% INLINEFORM2 for transfer from PTB POS tags. However, they found only a 0.06% INLINEFORM3 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.", "Our TagLM formulation increases the number of parameters in the second RNN layer INLINEFORM0 due to the increase in the input dimension INLINEFORM1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% INLINEFORM2 ) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no LM model. In this case, test INLINEFORM3 increased slightly to INLINEFORM4 indicating that the additional parameters in TagLM are slightly hurting performance.", "One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased INLINEFORM0 on the development set by 4.12% (from 49.93 to to 54.05%) for entity extraction over our baseline without LM embeddings and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain." ], [ "In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. 
Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples." ], [ "We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version." ] ], "section_name": [ "Introduction", "Overview", "Baseline sequence tagging model", "Bidirectional LM", "Combining LM with sequence model", "Experiments", "Overall system results", "Analysis", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "5113446c5057085c6fda63fc2324d5788a26a8b6" ], "answer": [ { "evidence": [ "In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters." ], "extractive_spans": [], "free_form_answer": "They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs.", "highlighted_evidence": [ "In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "69ffadad2399c928352e38923ee888693f38721e" ], "answer": [ { "evidence": [ "We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text." ], "extractive_spans": [], "free_form_answer": "micro-averaged F1", "highlighted_evidence": [ "We report the official evaluation metric (micro-averaged INLINEFORM0 ). ", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "36c3c00b3d17157779f1a8d368f1102d5fd27a06" ], "answer": [ { "evidence": [ "Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more then 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task." ], "extractive_spans": [], "free_form_answer": "91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task", "highlighted_evidence": [ "When we include the LM embeddings in our system overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more then 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "303053006f9ee68f6ac62f319b5b7f36a63a28a1" ], "answer": [ { "evidence": [ "Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.", "FLOAT SELECTED: Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.", "FLOAT SELECTED: Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).", "FLOAT SELECTED: Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data)." ], "extractive_spans": [], "free_form_answer": "Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016) ", "highlighted_evidence": [ "Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. ", "FLOAT SELECTED: Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.", "FLOAT SELECTED: Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.", "FLOAT SELECTED: Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).", "FLOAT SELECTED: Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "46fcdc5cf0ea7f102ea25b575bc63cc2bf3e894c" ], "answer": [ { "evidence": [ "We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0." ], "extractive_spans": [ "CoNLL 2003", "CoNLL 2000" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "how are the bidirectional lms obtained?", "what metrics are used in evaluation?", "what results do they achieve?", "what previous systems were compared to?", "what are the evaluation datasets?" ], "question_id": [ "c000a43aff3cb0ad1cee5379f9388531b5521e9a", "a5b67470a1c4779877f0d8b7724879bbb0a3b313", "12cfbaace49f9363fcc10989cf92a50dfe0a55ea", "4640793d82aa7db30ad7b88c0bf0a1030e636558", "a9c5252173d3df1c06c770c180a77520de68531b" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Figure 1: The main components in TagLM, our language-model-augmented sequence tagging system. The language model component (in orange) is used to augment the input token representation in a traditional sequence tagging models (in grey).", "Figure 2: Overview of TagLM, our language model augmented sequence tagging architecture. The top level embeddings from a pre-trained bidirectional LM are inserted in a stacked bidirectional RNN sequence tagging model. See text for details.", "Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.", "Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.", "Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).", "Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data).", "Table 5: Comparison of CoNLL-2003 test set F1 when the LM embeddings are included at different layers in the baseline tagger.", "Table 6: Comparison of CoNLL-2003 test set F1 for different language model combinations. All language models were trained and evaluated on the 1B Word Benchmark, except LSTM-512-256∗ which was trained and evaluated on the standard splits of the NER CoNLL 2003 dataset." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png" ] }
[ "how are the bidirectional lms obtained?", "what metrics are used in evaluation?", "what results do they achieve?", "what previous systems were compared to?" ]
[ [ "1705.00108-Bidirectional LM-4" ], [ "1705.00108-Experiments-0", "1705.00108-5-Table1-1.png" ], [ "1705.00108-Introduction-4" ], [ "1705.00108-5-Table2-1.png", "1705.00108-5-Table1-1.png", "1705.00108-6-Table3-1.png", "1705.00108-Overall system results-0", "1705.00108-6-Table4-1.png" ] ]
[ "They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs.", "micro-averaged F1", "91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task", "Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016) " ]
426
2002.06053
Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery
Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies for processing spoken and written languages have accelerated the application of NLP to textual representations of these biochemical entities: first to elucidate the hidden knowledge they contain, and then to use that knowledge to construct models that predict molecular properties or design novel molecules. This review outlines the impact of these advances on drug discovery and aims to further the dialogue between medicinal chemists and computer scientists.
{ "paragraphs": [ [ "The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest.", "The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.", "We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. 
In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18.", "Today, the era of “big data\" boosts the “learning\" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.", "With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding\" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences." ], [ "BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.", "All NLP technology relates to specific AI architectures. In Table TABREF38 W-we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review." ], [ "The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic\" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. 
Alignment algorithms, such as Needleman-Wunsh BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).", "To make predictions about the structure and function of compounds or proteins, the understanding of these sequences is critical for bioinformatics tasks with the final goal of accelerating drug discovery. Much like a linguist who uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of the NLP-concepts to biochemical data in order to solve bio/cheminformatics problems." ], [ "Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information.", "Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP." ], [ "The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/)." ], [ "The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound." ], [ "The Simplified Molecular Input Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html).", "SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28).", "Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. 
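As an aside to the canonicalization and SMILES-enumeration ideas discussed above, the sketch below parses a SMILES string with RDKit, writes its canonical form, and produces alternative (randomized) SMILES for the same molecule by shuffling the atom order. This is an illustrative data-augmentation sketch assuming RDKit is installed, not the procedure of any specific cited study.

```python
import random
from rdkit import Chem

def canonical_smiles(smiles):
    """Return RDKit's canonical SMILES for an input SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol, canonical=True)

def randomized_smiles(smiles, n=5, seed=0):
    """Generate alternative SMILES for the same molecule by renumbering its atoms."""
    random.seed(seed)
    mol = Chem.MolFromSmiles(smiles)
    variants = set()
    for _ in range(n):
        order = list(range(mol.GetNumAtoms()))
        random.shuffle(order)
        shuffled = Chem.RenumberAtoms(mol, order)
        variants.add(Chem.MolToSmiles(shuffled, canonical=False))
    return variants

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
print(canonical_smiles(ampicillin))
print(randomized_smiles(ampicillin))
```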
In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/\", “\\\", “@\", “@@\")." ], [ "DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)\") introducing longer sequences." ], [ "SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs\" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky-2 type grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in the favor of SELFIES." ], [ "InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. InChi representation has several layers (each) separated by the “/\" symbol.", "The software that generates InChi is publicly available and InChi does not suffer from ambiguity problems. However, its less complex structure makes the SMILES representation easier to use as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChi to canonical SMILES, whereas it failed to translate from canonical SMILES to InChi. BIBREF35 suggested that the complex syntax of InChi made it difficult for the model to generate a correct sequence." ], [ "SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html)." ], [ "SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product\" notation, and thus make use of SMILES and SMARTS languages. 
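The SMARTS-based pattern matching described above can be sketched with RDKit: a SMARTS pattern is compiled once and then used to test SMILES strings for the presence of a substructure. The pattern and molecules below are illustrative choices, not taken from the cited tools.

```python
from rdkit import Chem

# SMARTS pattern for an amide-like C(=O)N fragment (illustrative choice)
amide = Chem.MolFromSmarts("C(=O)N")

molecules = {
    "ampicillin": "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",
    "benzene": "c1ccccc1",
}

for name, smiles in molecules.items():
    mol = Chem.MolFromSmiles(smiles)
    matches = mol.GetSubstructMatches(amide)   # tuples of matching atom indices
    print(f"{name}: {len(matches)} amide-like match(es)")
```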
SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41." ], [ "Similar to words in natural languages, we can assume that the “words\" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages." ], [ "One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO\", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C\", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-n)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42 and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43.", "$k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49." ], [ "The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. 
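Before moving on to LCS-based words, the k-mer (LINGO) idea described above can be illustrated in a few lines of plain Python: overlapping k-mers are extracted with a sliding window, and two SMILES strings are compared through the Tanimoto coefficient of their LINGO sets. This is a simplified, set-based sketch with illustrative inputs, not the exact similarity used in the cited work.

```python
def kmers(sequence, k=4):
    """Overlapping k-mers (LINGOs for SMILES when k=4) via a sliding window."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def lingo_tanimoto(smiles_a, smiles_b, k=4):
    """Set-based Tanimoto similarity over the LINGOs of two SMILES strings."""
    a, b = set(kmers(smiles_a, k)), set(kmers(smiles_b, k))
    return len(a & b) / len(a | b)

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
print(kmers(ampicillin)[:5])   # ['CC1(', 'C1(C', '1(C(', '(C(N', 'C(N2']
print(lingo_tanimoto(ampicillin, "CC(=O)OC1=CC=CC=C1C(=O)O"))   # vs. aspirin
```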
LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43." ], [ "BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words\" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words\" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words\" are shorter compared to natural product “words\"." ], [ "Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic aminoacids in the words (i.e. grammar building), might prove effective." ], [ "Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words\" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words." ], [ "Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.", "Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses\" rather than “words\" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. 
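A minimal sketch of the byte-pair-encoding idea mentioned above: starting from single characters, the most frequent adjacent symbol pair in a toy corpus of sequences is merged repeatedly, yielding a data-driven vocabulary of "words". This is a simplified illustration (real BPE implementations track word frequencies and merge priorities more carefully), not the exact segmentation used in the cited study.

```python
from collections import Counter

def learn_bpe_merges(sequences, num_merges=10):
    """Learn BPE merge operations from raw sequences, each treated as one symbol stream."""
    corpus = [list(seq) for seq in sequences]   # start from single characters
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols in corpus:
            pair_counts.update(zip(symbols, symbols[1:]))
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges.append(best)
        # apply the merge everywhere in the corpus
        for symbols in corpus:
            i = 0
            while i < len(symbols) - 1:
                if (symbols[i], symbols[i + 1]) == best:
                    symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
                else:
                    i += 1
    return merges, corpus

# toy protein fragments (illustrative, not a curated data set)
toy_proteins = ["MKKLLPTAA", "MKKVLPTGA", "MKKLLPSAA"]
merges, segmented = learn_bpe_merges(toy_proteins, num_merges=5)
print(merges)      # learned pair merges, e.g. ('M', 'K') first
print(segmented)   # sequences re-written over the learned subword units
```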
Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.", "SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering.", "To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community." ], [ "The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors.", "Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics." ], [ "In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C(", "C3=CC=CC=C3)N)C(=O)O)C\" can be represented as a bag-of 8-mers as follows: {“CC1(C(N2\", “C1(C(N2C\", “1(C(N2C(\", “(C(N2C(S\",...,“N)C(=O)O\" ,“)C(=O)O)\" ,“C(=O)O)C\" }. 
We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer.", "Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73." ], [ "The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC\" is lower than that of “(C(N2C(S\", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S\" in a compound may be more informative.", "TF-IDF weigthing was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking." ], [ "In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78." ], [ "The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S\" frequently appears together with the word “C(C2=O)N\" in SMILES strings, this might suggest that they have related “meanings\". 
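The counting and weighting schemes described above can be reproduced in a few lines with scikit-learn, using a custom analyzer that tokenizes SMILES strings into overlapping 8-mers. This is an illustrative sketch (library availability and the choice of k are assumptions), not the setup of any particular cited study.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def smiles_words(smiles, k=8):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

corpus = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",  # ampicillin
    "CC(=O)OC1=CC=CC=C1C(=O)O",                             # aspirin
    "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",                         # caffeine
]

bow = CountVectorizer(analyzer=smiles_words).fit_transform(corpus)     # bag-of-8-mers counts
tfidf = TfidfVectorizer(analyzer=smiles_words).fit_transform(corpus)   # TF-IDF weighted

print(cosine_similarity(bow))     # pairwise similarity from raw counts
print(cosine_similarity(tfidf))   # pairwise similarity with IDF down-weighting common 8-mers
```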
Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.", "The distributed word embeddings models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-gram architecture in Word2Vec BIBREF72. For the vocabulary of size $V$, given the target word “2C(S\", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.", "The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.", "The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. 
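In the spirit of the SMILES-word embeddings described above (and without reproducing any cited pipeline exactly), a skip-gram model can be trained on k-mer "sentences" with gensim; gensim >= 4.0 is assumed (older versions use size instead of vector_size), the corpus shown is a toy stand-in for a large SMILES collection, and a molecule-level vector is obtained here by a simple average of its word vectors.

```python
import numpy as np
from gensim.models import Word2Vec

def smiles_words(smiles, k=8):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

# in practice this would be a large SMILES corpus, e.g. drawn from ChEMBL or PubChem
corpus = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",
    "CC(=O)OC1=CC=CC=C1C(=O)O",
    "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
]
sentences = [smiles_words(s) for s in corpus]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1,
                 sg=1, epochs=50, seed=1)          # sg=1 selects the skip-gram objective

def molecule_vector(smiles):
    """Average the embeddings of a molecule's 8-mers (one simple pooling choice)."""
    words = [w for w in smiles_words(smiles) if w in model.wv]
    return np.mean(model.wv[words], axis=0)

print(molecule_vector(corpus[0]).shape)            # (100,)
```

Average pooling is only one option; weighted or learned pooling over the word vectors is equally possible.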
BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers.", "FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89." ], [ "Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.", "Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language\" such as that of reactants and generate a novel text in another “language\" such as that of products BIBREF28. 
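A minimal PyTorch sketch of the character-level convolutional encoders mentioned above: SMILES characters are embedded, passed through a 1D convolution with global max-pooling, and fed to a regression head for a property such as solubility. Architecture sizes are arbitrary illustrative choices rather than any published configuration.

```python
import torch
import torch.nn as nn

class SmilesCNN(nn.Module):
    def __init__(self, vocab_size=64, emb_dim=64, channels=128, kernel=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=kernel)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(channels, 1))

    def forward(self, char_ids):                   # (batch, seq_len) integer-encoded SMILES
        x = self.embed(char_ids).transpose(1, 2)   # (batch, emb_dim, seq_len) for Conv1d
        x = torch.relu(self.conv(x))               # (batch, channels, seq_len - kernel + 1)
        x = x.max(dim=2).values                    # global max-pooling over positions
        return self.head(x).squeeze(-1)            # one predicted property per molecule

model = SmilesCNN()
batch = torch.randint(1, 64, (8, 100))             # 8 padded SMILES strings of length 100
print(model(batch).shape)                          # torch.Size([8])
```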
Below, we present recent studies on text based molecule generation.", "RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O\", the model would predict the next character to be “)\" with a higher probability than “(\". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model.", "The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. 
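A minimal sketch of the character-level RNN language model used for SMILES generation described above: an LSTM is trained to predict the next character, and new molecules are sampled character by character until an end token is drawn. Training code is omitted; the token inventory, dimensions, and sampling temperature are illustrative assumptions, and sampled strings would still need a validity check (e.g. with RDKit).

```python
import torch
import torch.nn as nn

class SmilesLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, char_ids, state=None):
        x = self.embed(char_ids)
        h, state = self.lstm(x, state)
        return self.out(h), state                  # logits over the next character

@torch.no_grad()
def sample(model, start_id, end_id, max_len=100, temperature=1.0):
    """Draw one SMILES string, character by character, from the (trained) model."""
    ids, state = [start_id], None
    for _ in range(max_len):
        logits, state = model(torch.tensor([[ids[-1]]]), state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        next_id = int(torch.multinomial(probs, 1))
        if next_id == end_id:
            break
        ids.append(next_id)
    return ids[1:]                                  # generated character ids (map back to SMILES)

model = SmilesLM(vocab_size=40)                     # e.g. SMILES characters plus start/end/pad tokens
print(sample(model, start_id=1, end_id=2)[:10])     # an untrained model emits random characters
```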
Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).", "Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108." ], [ "Machine translation finds use in cheminformatics in “translation\" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.", "The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased mainly because the encoder mapped the source sequence into a single fixed-length vector. However, fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as “attention mechanism\" BIBREF114.", "Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both inputs and output are sequences. 
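The encoder-decoder formulation behind these seq2seq systems can be illustrated with a bare GRU pair: the encoder consumes the tokenized source (e.g. reactants) and its final hidden state conditions a decoder that emits the target (e.g. product) token by token. The PyTorch sketch below shows a schematic forward pass with teacher forcing; the dimensions, the random toy token tensors, and the absence of an attention layer are assumptions made for brevity rather than the configuration of any cited system.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder for sequence 'translation' (e.g. reactants -> product)."""
    def __init__(self, vocab_size, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_size, emb)
        self.tgt_emb = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))        # h summarizes the source sequence
        dec, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec)                          # logits for every target position

vocab_size = 40
model = Seq2Seq(vocab_size)
src = torch.randint(0, vocab_size, (2, 30))       # two tokenized reactant strings (toy)
tgt_in = torch.randint(0, vocab_size, (2, 25))    # shifted product tokens (teacher forcing)
logits = model(src, tgt_in)
print(logits.shape)                               # torch.Size([2, 25, 40])
# Training would minimize cross-entropy between these logits and the gold product tokens.
```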
Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116 in contrast, performed the opposite task, the single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118.", "A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChi and SMILES, to extract latent representations that can integrate the semantic “meaning\" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words\" that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words\" such as biologically interpretable fragments.", "Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122 in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results." ], [ "The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. 
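Because the Transformer models discussed above contain no recurrence, token order has to be injected through positional embeddings; one common choice is the sinusoidal scheme of BIBREF119 (learned positional embeddings are another). The NumPy sketch below computes that scheme for a SMILES-length sequence; the sequence length and model dimension are illustrative assumptions.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Added elementwise to the token embeddings of, e.g., a tokenized SMILES string of length 60.
pe = positional_encoding(seq_len=60, d_model=128)
print(pe.shape)   # (60, 128)
```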
As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges." ], [ "The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions." ], [ "There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity, and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together the suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is also a similar attempt to build a benchmarking platform for tasks such as prediction of binding affinity and toxicity BIBREF82." ], [ "Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126." ], [ "The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited and inactive molecules are frequently not reported. Moreover, some experimental measurements that are not reproducible across different labs or conditions limit their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty." ], [ "The black box nature of ML/DL methodologies makes assigning meaning to the results difficult. 
Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. explainable-AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for the domain experts to assess the reliability of new methodolodogies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions to drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community." ], [ "The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and Glove BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) BIBREF135 and BERT BIBREF136, RoBERTa BIBREF137, GPT2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140 models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141.", "Unsupervised learning can be used on “big\" textual data through using language models with attention BIBREF119 and using pre-trained checkpoints from language models BIBREF146. 
Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics.", "Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation.", "Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems.", "With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation." ], [ "This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical." 
] ], "section_name": [ "Introduction", "Introduction ::: NLP Basics", "Biochemical Language Processing", "Biochemical Language Processing ::: Textual Chemical Data", "Biochemical Language Processing ::: Textual Chemical Data ::: IUPAC name", "Biochemical Language Processing ::: Textual Chemical Data ::: Chemical Formula", "Biochemical Language Processing ::: Textual Chemical Data ::: SMILES", "Biochemical Language Processing ::: Textual Chemical Data ::: DeepSMILES", "Biochemical Language Processing ::: Textual Chemical Data ::: SELFIES", "Biochemical Language Processing ::: Textual Chemical Data ::: InChI", "Biochemical Language Processing ::: Textual Chemical Data ::: SMARTS", "Biochemical Language Processing ::: Textual Chemical Data ::: SMIRKS", "Biochemical Language Processing ::: Identification of Words/Tokens", "Biochemical Language Processing ::: Identification of Words/Tokens ::: @!START@$k$@!END@-mers (@!START@$n$@!END@-grams)", "Biochemical Language Processing ::: Identification of Words/Tokens ::: Longest Common Subsequences", "Biochemical Language Processing ::: Identification of Words/Tokens ::: Maximum Common Substructure", "Biochemical Language Processing ::: Identification of Words/Tokens ::: Minimum Description Length", "Biochemical Language Processing ::: Identification of Words/Tokens ::: Byte-Pair Encoding", "Biochemical Language Processing ::: Identification of Words/Tokens ::: Pattern-based words", "Biochemical Language Processing ::: Text representation", "Biochemical Language Processing ::: Text representation ::: Bag-of-words representation", "Biochemical Language Processing ::: Text representation ::: TF-IDF", "Biochemical Language Processing ::: Text representation ::: One-hot representation", "Biochemical Language Processing ::: Text representation ::: Distributed representations", "Biochemical Language Processing ::: Text generation", "Biochemical Language Processing ::: Text generation ::: Machine Translation", "Future Perspectives", "Future Perspectives ::: Challenges", "Future Perspectives ::: Challenges ::: Benchmarking", "Future Perspectives ::: Challenges ::: Reproducibility", "Future Perspectives ::: Challenges ::: Bias in data", "Future Perspectives ::: Challenges ::: Interpretability", "Future Perspectives ::: Opportunities", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "3038e61acca2a4af92e033f6aa6f00b2cdc1f3a5" ], "answer": [ { "evidence": [ "Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "dbaa722f85dc3abc2c26ba3553f6ca9f29ea1698" ], "answer": [ { "evidence": [ "Machine translation finds use in cheminformatics in “translation\" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.", "The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. 
This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).", "Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108." ], "extractive_spans": [], "free_form_answer": "Both supervised and unsupervised, depending on the task that needs to be solved.", "highlighted_evidence": [ "Machine translation finds use in cheminformatics in “translation\" from one language (e.g. reactants) to another (e.g. products).", "The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101.", "Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "5b4987275ecda63cd6a316529978d4fbea890c55" ], "answer": [ { "evidence": [ "The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. 
The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no" ], "question": [ "Are datasets publicly available?", "Are this models usually semi/supervised or unsupervised?", "Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery?" ], "question_id": [ "a45edc04277a458911086752af4f17405501230f", "8c8a32592184c88f61fac1eef12c7d233dbec9dc", "16646ee77975fed372b76ce639e2664ae2105dcf" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "computer vision", "computer vision", "computer vision" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: The illustration of the Skip-Gram architecture of the Word2Vec algorithm. For a vocabulary of size V, each word in the vocabulary is described as a one-hot encoded vector (a binary vector in which only the corresponding word position is set to 1). The Skip-Gram architecture is a simple one hidden-layer neural network that aims to predict context (neighbor) words of a given target word. The extent of the context is determined by the window size parameter. In this example, the window size is equal to 1, indicating that the system will predict two context words (the word on the left and the word on the right of the target word) based on their probability scores. The number of nodes in the hidden layer (N) controls the size of the embedding vector. The weight matrix of VxN stores the trained embedding vectors.", "Figure 2: (Continued on the following page.)", "Figure 3: (Continued on the following page.)", "Table 1: NLP concepts and their applications in drug discovery", "Table 2: Widely used AI methodologies in NLP-based drug discovery studies", "Table 3: Commonly used databases in drug discovery", "Table 4: Different representations of the drug ampicillin" ], "file": [ "45-Figure1-1.png", "46-Figure2-1.png", "48-Figure3-1.png", "50-Table1-1.png", "51-Table2-1.png", "52-Table3-1.png", "53-Table4-1.png" ] }
[ "Are these models usually semi/supervised or unsupervised?" ]
[ [ "2002.06053-Biochemical Language Processing ::: Text generation-3", "2002.06053-Biochemical Language Processing ::: Text generation-4", "2002.06053-Biochemical Language Processing ::: Text generation ::: Machine Translation-0" ] ]
[ "Both supervised and unsupervised, depending on the task that needs to be solved." ]
427
1712.03547
Inducing Interpretability in Knowledge Graph Embeddings
We study the problem of inducing interpretability in knowledge graph (KG) embeddings. Specifically, we explore the Universal Schema (Riedel et al., 2013) and propose a method to induce interpretability. Many vector space models have been proposed for this problem; however, most of them do not address the interpretability (semantics) of individual dimensions. In this work, we propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves interpretability while maintaining comparable performance on other KG tasks.
{ "paragraphs": [ [ "Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, United States etc while the edges represent relationships between them.", "These KGs have grown huge, but they are still not complete BIBREF1 . Hence the task of inferring new facts becomes important. Many vector space models have been proposed which can perform reasoning over KGs efficiently BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF0 , BIBREF1 etc. These methods learn representations for entities and relations as vectors in a vector space, capturing global information about the KG. The task of KG inference is then defined as operations over these vectors. Some of these methods like BIBREF0 , BIBREF1 are capable of exploiting additional text data apart from the KG, resulting in better representations.", "Although these methods have shown good performance in applications, they don't address the problem of understanding semantics of individual dimensions of the KG embedding. A recent work BIBREF6 addressed the problem of learning semantic features for KGs. However, they don't directly use vector space modeling.", "In this work, we focus on incorporating interpretability in KG embeddings. Specifically, we aim to learn interpretable embeddings for KG entities by incorporating additional entity co-occurrence statistics from text data. This work is motivated by BIBREF7 who presented automated methods for evaluating topics learned via topic modelling methods. We adapt these measures for the vector space model and propose a method to directly maximize them while learning KG embedding. To the best of our knowledge, this work presents the first regularization term which induces interpretability in KG embeddings." ], [ "Several methods have been proposed for learning KG embeddings. They differ on the modeling of entities and relations, usage of text data and interpretability of the learned embeddings. We summarize some of these methods in following sections." ], [ "A very effective and powerful set of models are based on translation vectors. These models represent entities as vectors in $d$ -dimensional space, $\\mathbb {R}^d$ and relations as translation vectors from head entity to tail entity, in either same or a projected space. TransE BIBREF2 is one of the initial works, which was later improved by many works [ BIBREF3 , BIBREF4 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ]. Also, there are methods which are able to incorporate text data while learning KG embeddings. BIBREF0 is one such method, which assumes a combined universal schema of relations from KG as well as text. BIBREF1 further improves the performance by sharing parameters among similar textual relations." ], [ "While the vector space models perform well in many tasks, the semantics of learned representations are not directly clear. This problem for word embeddings was addressed by BIBREF12 where they proposed a set of constraints inducing interpretability. However, its adaptation for KG embeddings hasn't been addressed. A recent work BIBREF6 addressed a similar problem, where they learn coherent semantic features for entities and relations in KG. Our method differs from theirs in the following two aspects. 
Firstly, we use vector space modeling leading directly to KG embeddings while they need to infer KG embeddings from their probabilistic model. Second, we incorporate additional information about entities which helps in learning interpretable embeddings." ], [ "We are interested in inducing interpretability in KG embeddings and regularization is one good way to do it. So we want to look at novel regularizers in KG embeddings. Hence, we explore a measure of coherence proposed in BIBREF7 . This measure allows automated evaluation of the quality of topics learned by topic modeling methods by using additional Point-wise Mutual Information (PMI) for word pairs. It was also shown to have high correlation with human evaluation of topics.", "Based on this measure of coherence, we propose a regularization term. This term can be used with existing KG embedding methods (eg BIBREF0 ) for inducing interpretability. It is described in the following sections." ], [ "In topic models, coherence of a topic can be determined by semantic relatedness among top entities within the topic. This idea can also be used in vector space models by treating dimensions of the vector space as topics. With this assumption, we can use a measure of coherence defined in following section for evaluating interpretability of the embeddings.", " $Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.", "Coherence for top $k$ entities along dimension $l$ is defined as follows: ", "$$Coherence@k^{(l)} = \\sum _{i=2}^{k}\\sum _{j=1}^{i-1}{p_{ij}}$$ (Eq. 5) ", "where $p_{ij}$ is PMI score between entities $e_i$ and $e_j$ extracted from text data. $Coherence@k$ for the entity embedding matrix $\\theta _e$ is defined as the average over all dimensions. ", "$$Coherence@k = \\frac{1}{d} \\sum _{l=1}^{d} Coherence@k^{(l)}$$ (Eq. 6) ", "We want to learn an embedding matrix $\\theta _e$ which has high coherence (i.e. which maximizes $Coherence@k$ ). Since $\\theta _e$ changes during training, the set of top $k$ entities along each dimension varies over iterations. Hence, directly maximizing $Coherence@k$ seems to be tricky.", "An alternate approach could be to promote higher values for entity pairs having high PMI score $p_{ij}$ . This will result in an embedding matrix $\\theta _e$ with a high value of $Coherence@k$ since high PMI entity pairs are more likely to be among top $k$ entities.", "This idea can be captured by following coherence term ", "$$\\mathcal {C}(\\theta _e, P) = \\sum _{i=2}^{n}\\sum _{j=1}^{i-1} \\left\\Vert v(e_i)^\\intercal v(e_j) - p_{ij} \\right\\Vert ^2$$ (Eq. 8) ", "where $P$ is entity-pair PMI matrix and $v(e)$ denote vector for entity $e$ . This term can be used in the objective function defined in Equation 13 " ], [ "We use the Entity Model proposed in BIBREF0 for learning KG embeddings. This model assumes a vector $v(e)$ for each entity and two vectors $v_s(r)$ and $v_o(r)$ for each relation of the KG. The score for the triple $(e_s, r, e_o)$ is given by, ", "$$f(e_s, r, e_o) = v(e_s)^\\intercal v_s(r) + v(e_o)^\\intercal v_o(r)$$ (Eq. 10) ", "Training these vectors requires incorrect triples. So, we use the closed world assumption. 
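The Model-E score of Eq. 10 reduces to two dot products between the entity vector and the relation's subject- and object-slot vectors. A minimal NumPy sketch follows, with randomly initialized toy parameters standing in for trained ones (the corrupted-triple construction and the training loss are described next):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities, n_relations = 100, 5, 2

theta_e = rng.normal(scale=0.1, size=(n_entities, d))      # v(e): one vector per entity
theta_r_s = rng.normal(scale=0.1, size=(n_relations, d))   # v_s(r): subject-slot vector per relation
theta_r_o = rng.normal(scale=0.1, size=(n_relations, d))   # v_o(r): object-slot vector per relation

def score(e_s, r, e_o):
    """f(e_s, r, e_o) = v(e_s)^T v_s(r) + v(e_o)^T v_o(r)   (Eq. 10)."""
    return theta_e[e_s] @ theta_r_s[r] + theta_e[e_o] @ theta_r_o[r]

print(score(0, 1, 3))
```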
For each triple $t \\in \\mathcal {T}$ , we create two negative triples $t^-_o$ and $t^-_s$ by corrupting the object and subject of the triples respectively such that the corrupted triples don't appear in training, test or validation data. The loss for a triple pair is defined as $loss(t, t^-) = - \\log (\\sigma (f(t) - f(t^-)))$ . Then, the aggregate loss function is defined as ", "$$L(\\theta _e, \\theta _r, \\mathcal {T}) = \\frac{1}{|\\mathcal {T}|}\\sum _{t\\in \\mathcal {T}} \\left(loss(t, t^-_o) + loss(t, t^-_s) \\right)$$ (Eq. 11) " ], [ "The overall loss function can be written as follows: ", "$$L(\\theta _e, \\theta _r, \\mathcal {T}) + \\lambda _c \\mathcal {C}(\\theta _e, P) + \\lambda _r \\mathcal {R}(\\theta _e, \\theta _r)$$ (Eq. 13) ", "Where $\\mathcal {R}(\\theta _e, \\theta _r) = \\frac{1}{2}\\left(\\left\\Vert \\theta _e\\right\\Vert ^2+\\left\\Vert \\theta _r\\right\\Vert ^2\\right)$ is the $L2$ regularization term and $\\lambda _c$ and $\\lambda _r$ are hyper-parameters controlling the trade-off among different terms in the objective function." ], [ "We use the FB15k-237 BIBREF13 dataset for experiments. It contains 14541 entities and 237 relations. The triples are split into training, validation and test set having 272115, 17535 and 20466 triples respectively. For extracting entity co-occurrences, we use the textual relations used in BIBREF1 . It contains around 3.7 millions textual triples, which we use for calculating PMI for entity pairs." ], [ "We use the method proposed in BIBREF0 as the baseline. Please refer to Section \"Entity Model (Model-E)\" for more details. For evaluating the learned embeddings, we test them on different tasks. All the hyper-parameters are tuned using performance (MRR) on validation data. We use 100 dimensions after cross validating among 50, 100 and 200 dimensions. For regularization, we use $\\lambda _r = 0.01$ (from $10,1,0.1,0.01$ ) and $\\lambda _c = 0.01$ (from $10,1,0.1,0.01$ ) for $L2$ and coherence regularization respectively. We use multiple random initializations sampled from a Gaussian distribution. For optimization, we use gradient descent and stop optimization when gradient becomes 0 upto 3 decimal places. The final performance measures are reported for test data." ], [ "In following sections, we compare the performance of the proposed method with the baseline method in different tasks. Please refer to Table 1 for results.", "For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests. In word intrusion test BIBREF14 , top $k(=5)$ entities along a dimension are mixed with the bottom most entity (the intruder) in that dimension and shuffled. Then multiple (3 in our case) human annotators are asked to find out the intruder. We use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate following score for all $k+1$ entities ", "$$\\text{AutoWI}(e_i) = \\sum _{j=1, j\\ne i}^{k+1}{p_{ij}}$$ (Eq. 18) ", "where $p_{ij}$ are the PMI scores. The entity having least score is identified as the intruder. We report the fraction of dimensions for which we were able to identify the intruder correctly.", "As we can see in Table 1 , the proposed method achieves better values for $Coherence@5$ as a direct consequence of the regularization term, thereby maximizing coherence between appropriate entities. 
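The quantities behind these numbers, namely Coherence@k (Eqs. 5 and 6), the PMI-based regularizer C(theta_e, P) (Eq. 8), and the automated word-intrusion score (Eq. 18), are straightforward to compute from an embedding matrix and a PMI matrix. The NumPy sketch below uses random toy inputs in place of trained embeddings and corpus PMI; it is meant only to make the definitions concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10, 5
theta_e = rng.normal(size=(n, d))    # toy entity embedding matrix
P = rng.normal(size=(n, n))
P = (P + P.T) / 2                    # symmetric toy stand-in for the entity-pair PMI matrix

def coherence_at_k(theta_e, P, k=5):
    """Summed pairwise PMI among each dimension's top-k entities, averaged over dimensions (Eqs. 5-6)."""
    total = 0.0
    for l in range(theta_e.shape[1]):
        top = np.argsort(-theta_e[:, l])[:k]
        total += sum(P[top[i], top[j]] for i in range(1, k) for j in range(i))
    return total / theta_e.shape[1]

def coherence_term(theta_e, P):
    """C(theta_e, P) = sum over entity pairs of (v(e_i)^T v(e_j) - p_ij)^2   (Eq. 8)."""
    G = theta_e @ theta_e.T
    iu = np.triu_indices(theta_e.shape[0], k=1)
    return np.sum((G[iu] - P[iu]) ** 2)

def auto_word_intrusion(candidates, P):
    """Pick the intruder among k+1 candidates: the one with the lowest total PMI to the others (Eq. 18)."""
    scores = [sum(P[c, o] for o in candidates if o != c) for c in candidates]
    return candidates[int(np.argmin(scores))]

print(coherence_at_k(theta_e, P), coherence_term(theta_e, P))
print(auto_word_intrusion([3, 7, 11, 19, 23, 42], P))
```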
Performance on the word intrusion task also improves drastically as the intruder along each dimension is a lot easier to identify owing to the fact that the top entities for each dimension group together more conspicuously.", "In this experiment, we test the model's ability to predict the best object entity for a given subject entity and relation. For each of the triples, we fix the subject and the relation and rank all entities (within same category as true object entity) based on their score according to Equation 10 . We report Mean Rank (MR) and Mean Reciprocal rank (MRR) of the true object entity and Hits@10 (the number of times true object entity is ranked in top 10) as percentage.", "The objective of the coherence regularization term being tangential to that of the original loss function, is not expected to affect performance on the link prediction task. However, the results show a trivial drop of $1.2$ in MRR as the coherence term gives credibility to triples that are otherwise deemed incorrect by the closed world assumption.", "We have used abbreviations for BS (Bachelor of Science), MS (Master of Science), UK (United Kingdom) and USA (United States of America). They appear as full form in the data.", "In this experiment, we test the model on classifying correct and incorrect triples. For finding incorrect triples, we corrupt the object entity with a randomly selected entity within the same category. For classification, we use validation data to find the best threshold for each relation by training an SVM classifier and later use this threshold for classifying test triples. We report the mean accuracy and mean AUC over all relations.", "We observe that the proposed method achieves slightly better performance for triple classification improving the accuracy by $4.4$ . The PMI information adds more evidence to the correct triples which are related in text data, generating a better threshold that more accurately distinguishes correct and incorrect triples." ], [ "Since our aim is to induce interpretability in representations, in this section, we evaluate the embeddings learned by the baseline as well as the proposed method. For both methods, we select some dimensions randomly and present top 5 entities along those dimensions. The results are presented in Table 2 .", "As we can see from the results, the proposed method produces more coherent entities than the baseline method." ], [ "In this work, we proposed a method for inducing interpretability in KG embeddings using a coherence regularization term. We evaluated the proposed and the baseline method on the interpretability of the learned embeddings. We also evaluated the methods on different KG tasks and compared their performance. We found that the proposed method achieves better interpretability while maintaining comparable performance on KG tasks. As next steps, we plan to evaluate the generalizability of the method with more recent KG embeddings." ] ], "section_name": [ "Introduction", "Related Work", "Vector-space models for KG Embeddings", "Interpretability of Embedding", "Proposed Method", "Coherence", "Entity Model (Model-E)", "Objective", "Datasets", "Experimental Setup", "Results", "Qualitative Analysis of Results", "Conclusion and Future Works" ] }
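Tying the pieces together, the overall objective of Eq. 13 combines the pairwise logistic loss over corrupted triples (Eq. 11) with the coherence term and L2 regularization. The self-contained NumPy sketch below (restating the Eq. 10 score for completeness) only assembles the objective for a toy batch; gradient-based optimization, closed-world negative sampling, and the hyperparameter search are omitted, and the lambda values default to the 0.01 reported in the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_e, n_r = 100, 20, 4
theta_e = rng.normal(scale=0.1, size=(n_e, d))
theta_r_s = rng.normal(scale=0.1, size=(n_r, d))
theta_r_o = rng.normal(scale=0.1, size=(n_r, d))

def f(t):
    """Model-E score of a triple t = (subject, relation, object)   (Eq. 10)."""
    s, r, o = t
    return theta_e[s] @ theta_r_s[r] + theta_e[o] @ theta_r_o[r]

def pair_loss(t, t_neg):
    """loss(t, t^-) = -log(sigmoid(f(t) - f(t^-))), the per-pair loss inside Eq. 11."""
    return -np.log(1.0 / (1.0 + np.exp(-(f(t) - f(t_neg)))))

def objective(triples, neg_o, neg_s, coherence_value, lam_c=0.01, lam_r=0.01):
    """L (Eq. 11) + lam_c * C(theta_e, P) + lam_r * R(theta_e, theta_r)   (Eq. 13)."""
    L = np.mean([pair_loss(t, to) + pair_loss(t, ts)
                 for t, to, ts in zip(triples, neg_o, neg_s)])
    R = 0.5 * (np.sum(theta_e ** 2) + np.sum(theta_r_s ** 2) + np.sum(theta_r_o ** 2))
    return L + lam_c * coherence_value + lam_r * R

triples = [(0, 1, 2), (3, 0, 4)]
neg_o = [(0, 1, 9), (3, 0, 8)]     # object corrupted
neg_s = [(7, 1, 2), (6, 0, 4)]     # subject corrupted
print(objective(triples, neg_o, neg_s, coherence_value=0.0))
```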
{ "answers": [ { "annotation_id": [ "bce54aa89e2f59080e2bf3c6ca440458d73863b0" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "30868d9b249c3ea720cf766ea44e286bd6247b3e" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3)." ], "extractive_spans": [], "free_form_answer": "Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method.", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "annotation_id": [ "f5d7f7eb38cafbc62886385ff0afc50b90ab9212" ], "answer": [ { "evidence": [ "$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.", "For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests. In word intrusion test BIBREF14 , top $k(=5)$ entities along a dimension are mixed with the bottom most entity (the intruder) in that dimension and shuffled. Then multiple (3 in our case) human annotators are asked to find out the intruder. We use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate following score for all $k+1$ entities" ], "extractive_spans": [ "For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests." ], "free_form_answer": "", "highlighted_evidence": [ "$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.", "For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do the authors analyze what kinds of cases their new embeddings fail in where the original, less-interpretable embeddings didn't?", "When they say \"comparable performance\", how much of a performance drop do these new embeddings result in?", "How do they evaluate interpretability?" ], "question_id": [ "9c0cf1630804366f7a79a40934e7495ad9f32346", "a4d8fdcaa8adf99bdd1d7224f1a85c610659a9d3", "9ac923be6ada1ba2aa20ad62b0a3e593bb94e085" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "interpretability", "interpretability", "interpretability" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3).", "Table 2: Top 5 and bottom most entities for randomly selected dimensions. As we see, the proposed method produces more coherent entities compared to the baseline. Incoherent entities are marked in bold face. 2" ], "file": [ "4-Table1-1.png", "4-Table2-1.png" ] }
[ "When they say \"comparable performance\", how much of a performance drop do these new embeddings result in?" ]
[ [ "1712.03547-4-Table1-1.png" ] ]
[ "Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method." ]
428
1908.07218
CA-EHN: Commonsense Word Analogy from E-HowNet
Word analogy tasks have tended to be handcrafted, involving permutations of hundreds of words with dozens of relations, mostly morphological relations and named entities. Here, we propose modeling commonsense knowledge down to word-level analogical reasoning. We present CA-EHN, the first commonsense word analogy dataset containing 85K analogies covering 5K words and 6K commonsense relations. This was compiled by leveraging E-HowNet, an ontology that annotates 88K Chinese words with their structured sense definitions and English translations. Experiments show that CA-EHN stands out as a great indicator of how well word representations embed commonsense structures, which is crucial for future end-to-end models to generalize inference beyond training corpora. The dataset is publicly available at \url{https://github.com/jacobvsdanniel/CA-EHN}.
{ "paragraphs": [ [ "Commonsense reasoning is fundamental for natural language agents to generalize inference beyond their training corpora. Although the natural language inference (NLI) task BIBREF0 , BIBREF1 has proved a good pre-training objective for sentence representations BIBREF2 , commonsense coverage is limited and most models are still end-to-end, relying heavily on word representations to provide background world knowledge.", "Therefore, we propose modeling commonsense knowledge down to word-level analogical reasoning. In this sense, existing analogy benchmarks are lackluster. For Chinese analogy (CA), the simplified Chinese dataset CA8 BIBREF3 and the traditional Chinese dataset CA-Google BIBREF4 translated from the English BIBREF5 contain only a few dozen relations, most of which are either morphological, e.g., a shared prefix, or about named entities, e.g., capital-country.", "However, commonsense knowledge bases such as WordNet BIBREF6 and ConceptNet BIBREF7 have long annotated relations in our lexicon. Among them, E-HowNet BIBREF4 , extended from HowNet BIBREF8 , currently annotates 88K traditional Chinese words with their structured definitions and English translations.", "In this paper, we propose an algorithm for the extraction of accurate commonsense analogies from E-HowNet. We present CA-EHN, the first commonsense analogy dataset containing 85,226 analogies covering 5,563 words and 6,490 commonsense relations." ], [ "E-HowNet 2.0 consists of two major parts: A lexicon of words and concepts with multi-layered annotations, and a taxonomy of concepts with attached word senses." ], [ "The E-HowNet lexicon consists of two types of tokens: 88K words and 4K concepts. Words and concepts are distinguished by whether there is a vertical bar and an English string in the token. For example, UTF8bkai人 (person) and UTF8bkai雞 (chicken) are words, and human $\\vert $ UTF8bkai人 and UTF8bkai雞 $\\vert $ chicken are concepts. In this work, the order of English and Chinese within a concept does not matter. In addition, E-HowNet also contains dozens of relations, which comes fully in English, e.g., or, theme, telic.", "Words and concepts in E-HowNet are annotated with one or more structured definitions consisting of concepts and relations. Table 1 provides several examples with gradually increasing complexity: UTF8bkai人 (person) is defined simply as a human $\\vert $ UTF8bkai人; UTF8bkai駿馬 $\\vert $ ExcellentSteed is defined as a UTF8bkai馬 $\\vert $ horse which has a qualification relation with HighQuality $\\vert $ UTF8bkai優質; UTF8bkai實驗室 (laboratory) is defined as a InstitutePlace $\\vert $ UTF8bkai場所 used for conducting experiments or research. Each concept has only one definition, but a word may have multiple senses and hence multiple definitions. In this work, we use E-HowNet word sense definitions to extract commonsense analogies (Section \"Commonsense Analogy\" ). In addition, word senses are annotated with their English translations, which could be used to transfer our extracted analogies to English multi-word expressions (MWE)." ], [ "Concepts in E-HowNet are additionally organized into a taxonomy. Figure 1 shows the partially expanded tree. Each word sense in the E-HowNet lexicon is attached to a taxon in the tree. In this work, we show that infusing E-HowNet taxonomy tree into word embeddings boosts performance across benchmarks (Section \"Commonsense Infusing\" )." 
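The layered sense definitions of Table 1 can be pictured as small labelled structures. The toy sketch below encodes two of the examples as nested Python dictionaries and expands a trivial one-concept definition by one layer, anticipating the definition-expansion step of the extraction algorithm described below; the dictionary schema is an illustrative assumption, not E-HowNet's actual storage format.

```python
# Toy encoding of E-HowNet-style sense definitions as {"head": concept, relation: sub-definition}.
concept_defs = {
    "ExcellentSteed|駿馬": {"head": "horse|馬", "qualification": {"head": "HighQuality|優質"}},
    "wood|木": {"head": "wood|木"},
}

word_senses = {
    "駿馬": [{"head": "ExcellentSteed|駿馬"}],   # defined trivially by a single concept
    "人":   [{"head": "human|人"}],
}

def expand(definition, concept_defs):
    """Replace a trivial one-concept definition with that concept's own definition."""
    if set(definition) == {"head"} and definition["head"] in concept_defs:
        return concept_defs[definition["head"]]
    return definition

print(expand(word_senses["駿馬"][0], concept_defs))
# {'head': 'horse|馬', 'qualification': {'head': 'HighQuality|優質'}}
```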
], [ "We extract commonsense word analogies with a rich coverage of words and relations by comparing word sense definitions. The extraction algorithm is further refined with multiple filters, including putting the linguist in the loop." ], [ "Illustrated in Figure 2 , the extraction algorithm before refinement consists of five steps.", "Definition concept expansion. As many words are synonymous with some concepts, many word senses are defined trivially by one concept. For example, the definition of UTF8bkai駿馬 (excellent steed) is simply {UTF8bkai駿馬 $\\vert $ ExcellentSteed}. The triviality is resolved by expanding such definitions by one layer, e.g., replacing {UTF8bkai駿馬 $\\vert $ ExcellentSteed} with {UTF8bkai馬 $\\vert $ horse:qualification={HighQuality $\\vert $ UTF8bkai優質}}, i.e., the definition of UTF8bkai駿馬 $\\vert $ ExcellentSteed.", "Definition string parsing. We parse each definition into a directed graph. Each node in the graph is either a word, a concept, or a function relation, e.g., or() at the bottom of Table 1 . Each edge is either an attribute relation edge, e.g., :telic=, or a dummy argument edge connecting a function node with one of its argument nodes.", "Definition graph comparison. For every sense pair of two different words in the E-HowNet lexicon, we determine if their definition graphs differ only in one concept node. If they do, the two (word, concept) pairs are analogical to one another. For example, since the graph of UTF8bkai良材 sense#2 (the good timber) and the expanded graph of UTF8bkai駿馬 sense#1 (an excellent steed) differs only in wood $\\vert $ UTF8bkai木 and UTF8bkai馬 $\\vert $ horse, we extract the following concept analogy: UTF8bkai良材:wood $\\vert $ UTF8bkai木=UTF8bkai駿馬:UTF8bkai馬 $\\vert $ horse.", "Left concept expansion. For each concept analogy, we expand the left concept to those words that have one sense defined trivially by it. For example, there is only one word UTF8bkai木頭 (wood) defined as {wood $\\vert $ UTF8bkai木}. Thus after expansion, there is still only one analogy: UTF8bkai良材:UTF8bkai木頭=UTF8bkai駿馬:UTF8bkai馬 $\\vert $ horse. Most of the time, this step yields multiple analogies per concept analogy.", "Right concept expansion. Finally, the remaining concept in each analogy is again expanded to the list of words with a sense trivially defined by it. However, this time we do not use them to form multiple analogies. Instead, the word list is kept as a synset. For example, as UTF8bkai山馬 (orohippus), UTF8bkai馬 (horse), UTF8bkai馬匹 (horses), UTF8bkai駙 (side horse) all have one sense defined as {UTF8bkai馬 $\\vert $ horse}, the final analogy becomes UTF8bkai良材:UTF8bkai木頭=UTF8bkai駿馬:{UTF8bkai山馬,UTF8bkai馬,UTF8bkai馬匹,UTF8bkai駙}. When evaluating embeddings on our benchmark, we consider it a correct prediction as long as it belongs to the synset." ], [ "As the core procedure yields an excessively large benchmark, added to the fact that E-HowNet word sense definitions are sometimes inaccurate, we made several refinements to the extraction process.", "Concrete concepts. As we found that E-HowNet tends to provide more accurate definitions for more concrete concepts, we require words and concepts at every step of the process to be under physical $\\vert $ UTF8bkai物質, which is one layer below thing $\\vert $ UTF8bkai萬物 in Figure 1 . This restriction shrinks the benchmark by half.", "Common words. 
At every step of the process, we require words to occur at least five times in ASBC 4.0 BIBREF9 , a segmented traditional Chinese corpus containing 10M words from articles between 1981 and 2007. This eliminates uncommon, ancient words or words with synonymous but uncommon, ancient characters. As shown in Table 3 , the benchmark size is significantly reduced by this restriction.", "Linguist checking. We added two data checks into the extraction process between definition graph comparison and left concept expansion. As shown in Table 3 , each of the 36,100 concept analogies were checked by a linguist, leaving 24,439 accurate ones. Furthermore, each synset needed in the 24,439 concept analogies was checked again to remove words that are not actually synonymous with the defining concept. For example, UTF8bkai花草, UTF8bkai山茶花, UTF8bkai薰衣草, UTF8bkai鳶尾花 are all common words with a sense defined trivially as {FlowerGrass $\\vert $ UTF8bkai花草}. However, the last three (camellia, lavender, iris) are not actually synonyms but hyponyms to the concept. This step also helps eliminate words in a synset that are using their rare senses, as we do not expect embeddings to encode those senses without word sense disambiguation (WSD). After the second-pass linguist check, we arrived at 85,226 accurate analogies." ], [ "Table 3 compares Chinese word analogy datasets. Most analogies in existing datasets are morphological (morph.) or named entity (entity) relations. For example, CA8-Morphological BIBREF3 uses 21 shared prefix characters, e.g., UTF8bkai第, to form 2,553 analogies, e.g., UTF8bkai一 : UTF8bkai第一 = UTF8bkai二 : UTF8bkai第二 (one : first = two : second). As for named entities, some 20 word pairs of the capital-country relation can be permuted to form 190 analogies, which require a knowledge base but not commonsense to solve. Only the nature part of CA8 and the man-woman part of CA-Google BIBREF10 contains a handful of relations that requires commonsense world knowledge. In constrast, CA-EHN extracts 85K linguist-checked analogies covering 6,490 concept pairs, e.g., (wood $\\vert $ UTF8bkai木, UTF8bkai馬 $\\vert $ horse). Table 2 shows a small list of the data, covering such diverse domains as onomatopoeia, disability, kinship, and zoology. Full CA-EHN is available in the supplementary materials." ], [ "We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional Chinese part of Chinese Gigaword BIBREF13 and ASBC 4.0 BIBREF9 . The large corpus additionally includes the Chinese part of Wikipedia.", "Table 4 shows embedding performance across analogy benchmarks. Cov denotes the number of analogies of which the first three words exist in the embedding. Analogies that are not covered are excluded from that evaluation. Still, we observe that the larger corpus yields higher accuracy across all benchmarks. In addition, using SGNS instead of GloVe provides universal boosts in performance.", "While performance on CA-EHN correlates well to that on other benchmarks, commonsense analogies prove to be much more difficult than morphological or named entity analogies for distributed word representations." ], [ "E-HowNet comes in two major parts: a lexicon and a taxonomy. For the lexicon, we have used it to extract the CA-EHN commonsense analogies. For the taxonomy, we experiment infusing its hypo-hyper and same-taxon relations to distributed word representations by retrofitting BIBREF14 . 
For example, in Figure 1 , the word vector of 空間 is optimized to be close to both its distributed representation and the word vectors of 空隙 (same-taxon) and 事物 (hypo-hyper). Table 4 shows that retrofitting embeddings with E-HowNet taxonomy improves performance on most benchmarks, and all three embeddings have doubled accuracies on CA-EHN. This shows that CA-EHN is a great indicator of how well word representations embed commonsense knowledge." ], [ "We have presented CA-EHN, the first commonsense word analogy dataset, by leveraging word sense definitions in E-HowNet. After linguist checking, we have 85,226 Chinese analogies covering 5,563 words and 6,490 commonsense relations. We anticipate that CA-EHN will become an important benchmark testing how well future embedding methods capture commonsense knowledge, which is crucial for models to generalize inference beyond their training corpora. With translations provided by E-HowNet, Chinese words in CA-EHN can be transferred to English MWEs." ], "section_name": [ "Introduction", "E-HowNet", "Lexicon", "Taxonomy", "Commonsense Analogy", "Analogical Word Pairs", "Accurate Analogy", "Analogy Datasets", "Word Embeddings", "Commonsense Infusing", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "30b4603a79367cc4c6f73a0eff7717a0033f2885" ], "answer": [ { "evidence": [ "We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional Chinese part of Chinese Gigaword BIBREF13 and ASBC 4.0 BIBREF9 . The large corpus additionally includes the Chinese part of Wikipedia." ], "extractive_spans": [], "free_form_answer": "GloVE; SGNS", "highlighted_evidence": [ "We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ], "nlp_background": [ "infinity" ], "paper_read": [ "no" ], "question": [ "What types of word representations are they evaluating?" ], "question_id": [ "3b995a7358cefb271b986e8fc6efe807f25d60dc" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "commonsense" ], "topic_background": [ "research" ] }
{ "caption": [ "Table 1: E-HowNet lexicon", "Figure 1: E-HowNet taxonomy", "Table 2: Commonsense analogy", "Figure 2: Commonsense analogy extraction", "Table 3: Analogy benchmarks", "Table 4: Embedding performance" ], "file": [ "2-Table1-1.png", "2-Figure1-1.png", "3-Table2-1.png", "4-Figure2-1.png", "5-Table3-1.png", "5-Table4-1.png" ] }
[ "What types of word representations are they evaluating?" ]
[ [ "1908.07218-Word Embeddings-0" ] ]
[ "GloVE; SGNS" ]
429
1707.05853
Encoding Word Confusion Networks with Recurrent Neural Networks for Dialog State Tracking
This paper presents our novel method to encode word confusion networks, which can represent a rich hypothesis space of automatic speech recognition systems, via recurrent neural networks. We demonstrate the utility of our approach for the task of dialog state tracking in spoken dialog systems that relies on automatic speech recognition output. Encoding confusion networks outperforms encoding the best hypothesis of the automatic speech recognition in a neural system for dialog state tracking on the well-known second Dialog State Tracking Challenge dataset.
{ "paragraphs": [ [ "Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes, e.g. smalltalk BIBREF2 , BIBREF3 ) or a specific task such as finding restaurants or booking flights BIBREF4 , BIBREF5 . Here, we focus on task-oriented dialog systems, which assist the users to reach a certain goal.", "Task-oriented dialog systems are often implemented in a modular architecture to break up the complex task of conducting dialogs into more manageable subtasks. BIBREF6 describe the following prototypical set-up of such a modular architecture: First, an ASR system converts the spoken user utterance into text. Then, a spoken language understanding (SLU) module extracts the user's intent and coarse-grained semantic information. Next, a dialog state tracking (DST) component maintains a distribution over the state of the dialog, updating it in every turn. Given this information, the dialog policy manager decides on the next action of the system. Finally, a natural language generation (NLG) module forms the system reply that is converted into an audio signal via a text-to-speech synthesizer.", "Error propagation poses a major problem in modular architectures as later components depend on the output of the previous steps. We show in this paper that DST suffers from ASR errors, which was also noted by BIBREF7 . One solution is to avoid modularity and instead perform joint reasoning over several subtasks, e.g. many DST systems directly operate on ASR output and do not rely on a separate SLU module BIBREF8 , BIBREF7 , BIBREF9 . End-to-end systems that can be directly trained on dialogs without intermediate annotations have been proposed for open-domain dialog systems BIBREF3 . This is more difficult to realize for task-oriented systems as they often require domain knowledge and external databases. First steps into this direction were taken by BIBREF5 and BIBREF10 , yet these approaches do not integrate ASR into the joint reasoning process.", "We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets).", "We adapt recently proposed algorithms to encode lattices with recurrent neural networks (RNNs) BIBREF14 , BIBREF15 to encode cnets via an RNN based on Gated Recurrent Units (GRUs) to perform DST in a neural encoder-classifier system and show that this outperforms encoding only the best ASR hypothesis. We are aware of two DST approaches that incorporate posterior word-probabilities from cnets in addition to features derived from the n-best lists BIBREF11 , BIBREF16 , but to the best of our knowledge, we develop the first DST system directly operating on cnets." 
], [ "Our model depicted in Figure FIGREF3 is based on an incremental DST system BIBREF12 . It consists of an embedding layer for the words in the system and user utterances, followed by a fully connected layer composed of Rectified Linear Units (ReLUs) BIBREF17 , which yields the input to a recurrent layer to encode the system and user outputs in each turn with a softmax classifier on top. INLINEFORM0 denotes a weighted sum INLINEFORM1 of the system dialog act INLINEFORM2 and the user utterance INLINEFORM3 , where INLINEFORM4 , and INLINEFORM5 are learned parameters: DISPLAYFORM0 ", "Independent experiments with the 1-best ASR output showed that a weighted sum of the system and user vector outperformed taking only the user vector INLINEFORM0 as in the original model of BIBREF12 . We chose this architecture over other successful DST approaches that operate on the turn-level of the dialogs BIBREF8 , BIBREF7 because it processes the system and user utterances word-by-word, which makes it easy to replace the recurrent layer of the original version with the cnet encoder.", "Our cnet encoder is inspired from two recently proposed algorithms to encode lattices with an RNN with standard memory BIBREF14 and a GRU-based RNN BIBREF15 . In contrast to lattices, every cnet state has only one predecessor and groups together the alternative word hypotheses of a fixed time interval (timestep). Therefore, our cnet encoder is conceptually simpler and easier to implement than the lattice encoders: The recurrent memory only needs to retain the hidden state of the previous timestep, while in the lattice encoder the hidden states of all previously processed lattice states must be kept in memory throughout the encoding process. Following BIBREF15 , we use GRUs as they provide an extended memory compared to plain RNNs. The cnet encoder reads in one timestep at a time as depicted in Figure FIGREF4 . The key idea is to separately process each of the INLINEFORM0 word hypotheses representations INLINEFORM1 in a timestep with the standard GRU to obtain INLINEFORM2 hidden states INLINEFORM3 as defined in Equation ( EQREF7 )-() where INLINEFORM5 , and INLINEFORM6 are the learned parameters of the GRU update, candidate activation and reset gate. To get the hidden state INLINEFORM7 of the timestep, the hypothesis-specific hidden states INLINEFORM8 are combined by a pooling function (Equation ). DISPLAYFORM0 ", "We experiment with the two different pooling functions INLINEFORM0 for the INLINEFORM1 hidden GRU states INLINEFORM2 of the alternative word hypotheses that were used by BIBREF14 :", "Instead of the system output in sentence form we use the dialog act representations in the form of INLINEFORM0 dialog-act, slot, value INLINEFORM1 triples, e.g. `inform food Thai', which contain the same information in a more compact way. Following BIBREF7 , we initialize the word embeddings with 300-dimensional semantically specialized PARAGRAM-SL999 embeddings BIBREF21 . The hyper-parameters for our model are listed in the appendix.", "The cnet GRU subsumes a standard GRU-based RNN if each token in the input is represented as a timestep with a single hypothesis. We adopt this method for the system dialog acts and the baseline model that encode only the best ASR hypothesis." ], [ "In our experiments, we use the dataset provided for the second Dialog State Tracking Challenge (DSTC2) BIBREF22 that consists of user interactions with an SDS in the restaurant domain. 
It encompasses 1612, 506, 1117 dialogs for training, development and testing, respectively. Every dialog turn is annotated with its dialog state encompassing the three goals for area (7 values), food (93 values) and price range (5 values) and 8 requestable slots, e.g. phone and address. We train on the manual transcripts and the cnets provided with the dataset and evaluate on the cnets.", "Some system dialog acts in the DSTC2 dataset do not correspond to words and thus were not included in the pretrained word embeddings. Therefore, we manually constructed a mapping of dialog acts to words contained in the embeddings, where necessary, e.g. we mapped expl-conf to explicit confirm.", "In order to estimate the potential of improving DST by cnets, we investigated the coverage of words from the manual transcripts for different ASR output types. As shown in Table TABREF10 , cnets improve the coverage of words from the transcripts by more than 15 percentage points over the best hypothesis and more than five percentage points over the 10-best hypotheses.", "However, the cnets provided with the DSTC2 dataset are quite large. The average cnet consists of 23 timesteps with 5.5 hypotheses each, amounting to about 125 tokens, while the average best hypothesis contains four tokens. Manual inspection of the cnets revealed that they contain a lot of noise such as interjections (uh, oh, ...) that never appear in the 10-best lists. The appendix provides an exemplary cnet for illustration. To reduce the processing time and amount of noisy hypotheses, we remove all interjections and additionally experiment with pruning hypotheses with a score below a certain threshold. As shown in Table TABREF10 , this does not discard too many correct hypotheses but markedly reduces the size of the cnet to an average of seven timesteps with two hypotheses." ], [ "We report the joint goals and requests accuracy (all goals or requests are correct in a turn) according to the DSTC2 featured metric BIBREF22 . We train each configuration 10 times with different random seeds and report the average, minimum and maximum accuracy. To study the impact of ASR errors on DST, we trained and evaluated our model on the different user utterance representations provided in the DSTC2 dataset. Our baseline model uses the best hypothesis of the batch ASR system, which has a word error rate (WER) of 34% on the DSTC2 test set. Most DST approaches use the hypotheses of the live ASR system, which has a lower WER of 29%. We train our baseline on the batch ASR outputs as the cnets were also produced by this system. As can be seen from Table TABREF11 , the DST accuracy slightly increases for the higher-quality live ASR outputs. More importantly, the DST performance drastically increases, when we evaluate on the manual transcripts that reflect the true user utterances nearly perfectly." ], [ "Table TABREF13 displays the results for our model evaluated on cnets for increasingly aggressive pruning levels (discarding only interjections, additionally discarding hypotheses with scores below 0.001 and 0.01, respectively). As can be seen, using the full cnet except for interjections does not improve over the baseline. We believe that the share of noisy hypotheses in the DSTC2 cnets is too high for our model to be able to concentrate on the correct hypotheses. However, when pruning low-probability hypotheses both pooling strategies improve over the baseline. 
Yet, average pooling performs worse for the lower pruning threshold, which shows that the model is still affected by noise among the hypotheses. Conversely, the model can exploit a rich but noisy hypothesis space by weighting the information retained from each hypothesis: Weighted pooling performs better for the lower pruning threshold of 0.001 with which we obtain the highest result overall, improving the joint goals accuracy by 1.6 percentage points compared to the baseline. Therefore, we conclude that is beneficial to use information from all alternatives and not just the highest scoring one but that it is necessary to incorporate the scores of the hypotheses and to prune low-probability hypotheses. Moreover, we see that an ensemble model that averages the predictions of ten cnet models trained with different random seeds also outperforms an ensemble of ten baseline models.", "Although it would be interesting to compare the performance of cnets to full lattices, this is not possible with the original DSTC2 data as there were no lattices provided. This could be investigated in further experiments by running a new ASR system on the DSTC2 dataset to obtain both lattices and cnets. However, these results will not be comparable to previous results on this dataset due to the different ASR output." ], [ "The current state of the art on the DSTC2 dataset in terms of joint goals accuracy is an ensemble of neural models based on hand-crafted update rules and RNNs BIBREF16 . Besides, this model uses a delexicalization mechanism that replaces mentions of slots or values from the DSTC2 ontology by a placeholder to learn value-independent patterns BIBREF8 , BIBREF23 . While this approach is suitable for small domains and languages with a simple morphology such as English, it becomes increasingly difficult to locate words or phrases corresponding to slots or values in wider domains or languages with a rich morphology BIBREF7 and we therefore abstained from delexicalization.", "The best result for the joint requests was obtained by a ranking model based on hand-crafted features, which relies on separate SLU systems besides ASR BIBREF11 . SLU is often cast as sequence labeling problem, where each word in the utterance is annotated with its role in respect to the user's intent BIBREF24 , BIBREF25 , requiring training data with fine-grained word-level annotations in contrast to the turn-level dialog state annotations. Furthermore, a separate SLU component introduces an additional set of parameters to the SDS that has to be learned. Therefore, it has been argued to jointly perform SLU and DST in a single system BIBREF8 , which we follow in this work.", "As a more comparable reference for our set-up, we provide the result of the neural DST system of BIBREF7 that like our approach does not use outputs of a separate SLU system nor delexicalized features. Our ensemble models outperform BIBREF7 for the joint requests but are a bit worse for the joint goals. We stress that our goal was not to reach for the state of the art but show that DST can benefit from encoding cnets." ], [ "As we show in this paper, ASR errors pose a major obstacle to accurate DST in SDSs. To reduce the error propagation, we suggest to exploit the rich ASR hypothesis space encoded in cnets that contain more correct hypotheses than conventionally used n-best lists. 
We develop a novel method to encode cnets via a GRU-based RNN and demonstrate that this leads to improved DST performance compared to encoding the best ASR hypothesis on the DSTC2 dataset.", "In future experiments, we would like to explore further ways to leverage the scores of the hypotheses, for example by incorporating them as an independent feature rather than a direct weight in the model." ], [ "We thank our anonymous reviewers for their helpful feedback. Our work has been supported by the German Research Foundation (DFG) via a research grant to the project A8 within the Collaborative Research Center (SFB) 732 at the University of Stuttgart." ], [ " " ] ], "section_name": [ "Introduction", "Proposed Model", "Data", "Results and Discussion", "Results of the Model with Cnet Encoder", "Comparison to the State of the Art", "Conclusion", "Acknowledgments", "A. Hyper-Parameters" ] }
{ "answers": [ { "annotation_id": [ "312af031a42cf3f12ee68c1f1f1beeb08bd8324f" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. dt are one-hot word vectors of the system dialog acts; wti correspond to the word hypotheses in the timesteps of the cnets of the user utterances; sj , uj are the cnet GRU outputs at the end of each system or user utterance." ], "extractive_spans": [], "free_form_answer": "GRU", "highlighted_evidence": [ "FLOAT SELECTED: Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. dt are one-hot word vectors of the system dialog acts; wti correspond to the word hypotheses in the timesteps of the cnets of the user utterances; sj , uj are the cnet GRU outputs at the end of each system or user utterance." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "79da529505fbed62d0e8f37c1af69a4b0f6c6fb0" ], "answer": [ { "evidence": [ "We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets)." ], "extractive_spans": [], "free_form_answer": "It is a network used to encode speech lattices to maintain a rich hypothesis space.", "highlighted_evidence": [ "Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "What type of recurrent layers does the model use?", "What is a word confusion network?" ], "question_id": [ "2210178facc0e7b3b6341eec665f3c098abef5ac", "7cf726db952c12b1534cd6c29d8e7dfa78215f9e" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. dt are one-hot word vectors of the system dialog acts; wti correspond to the word hypotheses in the timesteps of the cnets of the user utterances; sj , uj are the cnet GRU outputs at the end of each system or user utterance.", "Figure 2: Encoding k alternative hypotheses at timestep t of a cnet.", "Table 1: Coverage of words from the manual transcripts in the DSTC2 development set of different batch ASR output types (%). In the pruned cnet interjections and hypotheses with scores below 0.001 were removed.", "Table 2: DSTC2 test set accuracy for 1-best ASR outputs of ten runs with different random seeds in the format average maximumminimum .", "Table 3: DSTC2 test set accuracy of ten runs with different random seeds in the format average maximumminimum . ? denotes a statistically significant higher result than the baseline (p < 0.05, Wilcoxon signed-rank test with Bonferroni correction for ten repeated comparisons). The cnet ensemble corresponds to the best cnet model with pruning threshold 0.001 and weighted pooling.", "Table 4: Cnet from the DSTC2 development set of the session with id voip-db80a9e6df20130328 230354. The transcript is i don’t care, which corresponds the best hypothesis of both ASR systems. Every timestep contains the hypothesis that there is no word (!null)." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "8-Table4-1.png" ] }
[ "What type of recurrent layers does the model use?", "What is a word confusion network?" ]
[ [ "1707.05853-2-Figure1-1.png" ], [ "1707.05853-Introduction-3" ] ]
[ "GRU", "It is a network used to encode speech lattices to maintain a rich hypothesis space." ]
430
1912.01046
TutorialVQA: Question Answering Dataset for Tutorial Videos
Despite the number of currently available datasets on video question answering, there still remains a need for a dataset involving multi-step and non-factoid answers. Moreover, relying on video transcripts remains an under-explored topic. To adequately address this, we propose a new question answering task on instructional videos, because of their verbose and narrative nature. While previous studies on video question answering have focused on generating a short text as an answer, given a question and video clip, our task aims to identify a span of a video segment as an answer which contains instructional details with various granularities. This work focuses on screencast tutorial videos pertaining to an image editing program. We introduce a dataset, TutorialVQA, consisting of about 6,000 manually collected triples of (video, question, answer span). We also provide experimental results with several baseline algorithms using the video transcripts. The results indicate that the task is challenging and call for the investigation of new algorithms.
{ "paragraphs": [ [ "Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.", "However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask on video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be the ones asking about a how procedure as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task.", "Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering.", "The other issue is that most videos with sufficient amount of information, which are likely to be asked, have much longer lengths than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies.", "In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1.", "The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6." ], [ "Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Such datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8 are factoid-based, meaning the answers typically involve a single entity. 
Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step or even fragmented into multiple segments throughout the video.", "Visual corpora in particular have proven extremely valuable to visual question-answering tasks BIBREF9, the most similar being MovieQA BIBREF1 and VideoQA BIBREF0. Similar to how our data is generated from video tutorials, the MovieQA and VideoQA corpus is generated from movie scripts and news transcripts, respectively. MovieQA's answers have a shorter span than the answers collected in our corpus, because questions and answer pairs were generated after each paragraph in a movie's plot synopsis BIBREF1. The MovieQA dataset also contains specific annotated answers with incorrect examples for each question. In the VideoQA dataset, questions focus on a single entity, contrary to our instructional video dataset. Although not necessarily a visual question-answering task, the work proposed by BIBREF10 involved answering questions over transcript data. Contrary to our work, Gupta's dataset is not publicly available and their examples only showcase factoid-style questions involving single entity answers.", "BIBREF11 focus on aligning a set of instructions to a video of someone carrying out those instructions. In their task, they use the video transcript to represent the video, which they later augment with a visual cue detector on food entities. Their task focuses on procedure-based cooking videos, and contrary to our task is primarily a text alignment task. In our task we aim to answer questions, using the transcripts, on instructional-style videos, in which the answer can involve steps not mentioned in the question." ], [ "In this section, we introduce the TutorialVQA dataset and describe the data collection process ." ], [ "Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each of which is a complete unit as an answer span). Each sentence is associated with the starting and ending time-stamps, which can be used to access the relevant visual information.", "The dataset contains 6,195 non-factoid QA pairs, where the answers are the segments that were manually annotated. Fig. FIGREF5 shows an example of the annotations. video_id can be used to retrieve the video information such as meta information and the transcripts. answer_start and answer_end denote the starting and ending sentence indexes of the answer span. Table TABREF4 shows the statistics of our dataset, with each answer segment having on average about 6 sentences, showing that our answers are more verbose than those in previous factoid QA tasks." ], [ "We chose videos pertaining to an image editing software because of the complexity and variety of tasks involved. In these videos, a narrator is communicating an overall goal by utilizing an example. 
For example, in FIGREF1 the video pertains to combining multiple layers into one image. However, throughout the videos multiple subtasks are achieved, such as the opening of multiple images, the masking of images, and the placement of two images side-by-side. These subtasks involve multiple steps and are of interest to us in segmenting the videos. Each segment can be seen as a subtask within a larger video dictating an example. We thus chose these videos because of the amount of procedural information stored in each video for which the user may ask. Though there is only one domain, each video corresponds to a different overall goal." ], [ "We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos.", "Our AMT experiment consisted of two parts. In the first part, we presented the workers with the video content of a segment. For each segment, we asked workers to generate questions that can be answered by the presented segment. We did not limit the number of questions a worker can input to a corresponding segment and encouraged them to input a diverse set of questions which the span can answer. Along with the questions, the workers were also required to provide a justification as to why they made their questions. We manually checked this justification to filter out the questions with poor quality by removing those questions which were unrelated to the video. One initial challenge worth mentioning is that at first some workers input questions they had about the video and not questions which the video could answer. This was solved by providing them with an unrelated example. The second part of the question collection framework consisted of a paraphrasing task. In this task we presented workers with the questions generated by the first task and asked them to write the questions differently while keeping the semantics the same. In this way, we expanded our question dataset. After filtering out the questions with low quality, we collected a total of 6,195 questions.", "It is important to note the differences between our data collection process and the the query generation process employed in the Search and Hyperlinking Task at MediaEval BIBREF12. In the Search and Hyperlinking Task, 30 users were tasked to first browse the collection of videos, select interesting segments with start and end times, and then asked to conjecture questions that they would use on a search query to find the interesting video segments. This was done in order to emulate their thought-proces mechanism. 
While the nature of their task involves queries relating to the overall videos themselves, hence coming from a video's interestingness, our task involves users already being given a video and formulating questions where the answers themselves come from within a video. By presenting the same video segment to many users, we maintain a consistent set of video segments and extend the possibility to generate a diverse set of question for the same segment." ], [ "Table TABREF12 presents some extracted sample questions from our dataset. The first column corresponds to an AMT generated question, while the second column corresponds to the video ID where the segment can be found. As can be seen in the first two rows, multiple types of questions can be answered within the same video (but different segments). The last two rows display questions which belong to the same segment but correspond to different properties of the same entity, 'crop tool'. Here we observe different types of questions, such as \"why\", \"how\", \"what\", and \"where\", and can see why the answers may involve multiple steps. Some questions that the worked paraphrased were in the \"yes/no\" style, however our answer segments then provide an explanation to these questions.", "Each answer segment was extracted from an image editing tutorial video that involved multiple steps and procedures to produce a final image, which can partially be seen in FIGREF1. The average number of sentences per video was approximately 52, with the maximum number of sentences contained in a video being 187. The sub-tasks in the tutorial include segments (and thus answers) on editing parts of images, instructions on using certain tools, possible actions that can be performed on an image, and identifying the locations of tools and features, with the shortest and longest segment having a span of 1 and 37 sentences respectively, demonstrating the heterogeneity of the answer spans." ], [ "Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset." ], [ "Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.", "Model. The model takes two inputs, a transcript, $\\lbrace s_1, s_2, ... s_n\\rbrace $ where $s_i$ are individual sentences and a question, $q$. The output is the span scores, $y$, the scores over all possible spans. GLoVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.", "where n is the number of sentences . The output of Passage-level Encoding, $p$, is a sequence of vector, $p_i$, which represents the latent meaning of each sentence. 
Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.", "where [$\\cdot $,$\\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.", "In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.", "Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.", "Specifically, the predicted span is counted as correct if $|pred_{start} - gt_{start}| + |pred_{end} - gt_{end}| <=$ $k$, where $pred_{start/end}$ and $gt_{start/end}$ indicate the indices of the predicted and ground-truth starting and ending sentences, respectively. We then measure the percentage of correctly predicted questions among the entire test questions." ], [ "We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phrase (See Section. SECREF3). Note that each segments corresponds to a candidate answer. Then, the task is to pick the best segment for given a query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.", "Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.", "$h^s$ is then re-weighted using attention weights.", "where $\\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.", "During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.", "Metrics. We used accuracy and MRR (Mean Reciprocal Ranking) as metrics. The accuracy is", "We split the ground-truth dataset to train/dev/test into the ratio of 6/2/2. The resulting size is 3,718 (train), 1,238 (dev) and 1,239 qa pairs (test)." ], [ "We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. 
While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.", "Model. The first two inputs are are the question, q, and video transcript, v, encoded by their TF-IDF vectors: BIBREF18:", "We then filter the top 10 video transcripts(out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, Stop10n, where n = 10. We repeat the process for the corresponding segments:", "selecting the segment with the minimal cosine distance distance to the query.", "Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:" ], [ "Tables TABREF20, TABREF21, TABREF22 show the results. First, the tables show that the two first baselines under-perform for our task. Even with a tolerance window of 6, Baseline1 merely achieves an accuracy of .14. Baseline2, despite being a simpler task, has only an accuracy of .23. Second, while we originally hypothesized that the segment selection task should be easier than the sentence prediction task, Table TABREF21 shows that the task is also challenging. One possible reason is that the segments contained within the same transcript have similar contents, due to the composition of the overall task in each video, and differentiating among them may require a more sophisticated model than just using a sequence model for segment representation. Table TABREF22 shows the accuracy of retrieving the correct segment, for baseline both overall and given that the video selected is within the top 10 videos. While the overall accuracy is only .16, by reducing the search space to 10 relevant videos our accuracy increases to 0.6385. In future iterations, it may then be useful to find better approaches in filtering large paragraphs of text before predicting the correct segment." ], [ "We performed an error analysis on Baseline1's results. We first observe that, in 92% of the errors, the predicted span and the ground-truth overlap. Furthermore, in 56% of the errors, the predicted spans are a subset or superset of the ground-truth spans. This indicates that the model finds the rough answer regions but fails to locate the precise boundaries. To address this issue, we plan on exploring the Pointer-network BIBREF19, which finds an answer span by selecting the boundary sentences. Unlike Baseline1 which avoids an explicit segmentation step, the Pointer-network can explicitly model which sentences are likely to be a boundary sentence. Moreover, the search space of the spans in the Pointer-network is $2n$ where $n$ is the number of sentences, because it selects only two boundary sentences. Note that the search space of Baseline1 is $n^2$. A much smaller search space might improve the accuracy by making the model consider fewer candidates.", "In future work, we also plan to use multi-modal information. While our baselines only used the transcript, complementing the narratives with the visual information may improve the performance, similarly to the text alignment task in BIBREF11." ], [ "We have described the collection, analysis, and baseline results of TutorialVQA, a new type of dataset used to find answer spans in tutorial videos. 
Our data collection method for question-answer pairs on instructional video can be further adopted to other domains where the answers involve multiple steps and are part of an overall goal, such as cooking or educational videos. We have shown that current baseline models for finding the answer spans are not sufficient for achieving high accuracy and hope that by releasing this new dataset and task, more appropriate question answering models can be developed for question answering on instructional videos." ] ], "section_name": [ "Introduction", "Related Work", "TutorialVQA Dataset", "TutorialVQA Dataset ::: Overview", "TutorialVQA Dataset ::: Basis", "TutorialVQA Dataset ::: Data Collection", "TutorialVQA Dataset ::: Dataset Details", "Baselines", "Baselines ::: Baseline1: Sentence-level prediction", "Baselines ::: Baseline2: Segment retrieval", "Baselines ::: Baseline3: Pipeline Segment retrieval", "Baselines ::: Results", "Discussion and Future Work", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "f466691953c65dd6bef779e24b5b85bb75615e7b" ], "answer": [ { "evidence": [ "Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.", "Metrics. We used accuracy and MRR (Mean Reciprocal Ranking) as metrics. The accuracy is", "Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:" ], "extractive_spans": [], "free_form_answer": "For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy", "highlighted_evidence": [ "Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.", "Metrics. We used accuracy and MRR (Mean Reciprocal Ranking) as metrics. ", "Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "31725daa687440f02f7da29f24a834f7b5159e47" ], "answer": [ { "evidence": [ "The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6." ], "extractive_spans": [], "free_form_answer": "tutorial videos for a photo-editing software", "highlighted_evidence": [ "The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "dfb317082e1d8d9c797aae077dd80334f151aa47" ], "answer": [ { "evidence": [ "Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.", "Baselines ::: Baseline1: Sentence-level prediction", "Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). 
The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.", "Model. The model takes two inputs, a transcript, $\\lbrace s_1, s_2, ... s_n\\rbrace $ where $s_i$ are individual sentences and a question, $q$. The output is the span scores, $y$, the scores over all possible spans. GLoVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.", "where n is the number of sentences . The output of Passage-level Encoding, $p$, is a sequence of vector, $p_i$, which represents the latent meaning of each sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.", "where [$\\cdot $,$\\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.", "In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.", "Baselines ::: Baseline2: Segment retrieval", "We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phrase (See Section. SECREF3). Note that each segments corresponds to a candidate answer. Then, the task is to pick the best segment for given a query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.", "Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.", "$h^s$ is then re-weighted using attention weights.", "where $\\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.", "During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.", "Baselines ::: Baseline3: Pipeline Segment retrieval", "We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. 
While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.", "Model. The first two inputs are are the question, q, and video transcript, v, encoded by their TF-IDF vectors: BIBREF18:", "We then filter the top 10 video transcripts(out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, Stop10n, where n = 10. We repeat the process for the corresponding segments:", "selecting the segment with the minimal cosine distance distance to the query." ], "extractive_spans": [], "free_form_answer": "a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm", "highlighted_evidence": [ "Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks.", "Baselines ::: Baseline1: Sentence-level prediction\nGiven a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.\n\nModel. The model takes two inputs, a transcript, $\\lbrace s_1, s_2, ... s_n\\rbrace $ where $s_i$ are individual sentences and a question, $q$. The output is the span scores, $y$, the scores over all possible spans. GLoVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.\n\nwhere n is the number of sentences . The output of Passage-level Encoding, $p$, is a sequence of vector, $p_i$, which represents the latent meaning of each sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.\n\nwhere [$\\cdot $,$\\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.\n\nIn training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.", "Baselines ::: Baseline2: Segment retrieval\nWe also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phrase (See Section. SECREF3). Note that each segments corresponds to a candidate answer. Then, the task is to pick the best segment for given a query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.\n\nModel. The two inputs, $s$ and $q$ represent the segment text and a question. 
The model first encodes the two inputs.\n\n$h^s$ is then re-weighted using attention weights.\n\nwhere $\\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.\n\nDuring training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.\n\n", "Baselines ::: Baseline3: Pipeline Segment retrieval\nWe construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.\n\nModel. The first two inputs are are the question, q, and video transcript, v, encoded by their TF-IDF vectors: BIBREF18:\n\nWe then filter the top 10 video transcripts(out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, Stop10n, where n = 10. We repeat the process for the corresponding segments:\n\nselecting the segment with the minimal cosine distance distance to the query." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "38b128652bdfeaab13ddf672bf97ded5bffb5bce" ], "answer": [ { "evidence": [ "We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos." ], "extractive_spans": [ "a tutorial website about an image editing program " ], "free_form_answer": "", "highlighted_evidence": [ "We downloaded 76 videos from a tutorial website about an image editing program . 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "What evaluation metrics were used in the experiment?", "What kind of instructional videos are in the dataset?", "What baseline algorithms were presented?", "What is the source of the triples?" ], "question_id": [ "5bcc12680cf2eda2dd13ab763c42314a26f2d993", "7a53668cf2da4557735aec0ecf5f29868584ebcf", "8051927f914d730dfc61b2dc7a8580707b462e56", "09621c9cd762e1409f22d501513858d67dcd3c7c" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "dataset", "dataset", "dataset", "dataset" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1: An illustration of our task, where the red in the timeline indicates where answers can be found in a video.", "Table 2: Examples of question variations", "Figure 3: Baseline models for sentence-level prediction and video segment retrieval tasks.", "Table 3: Sentence-level prediction results for Baseline1 with different tolerance window sizes k.", "Table 5: Segment level prediction for Baseline3 pipeline, both overall and given that the video is in the top 10." ], "file": [ "2-Figure1-1.png", "3-Table2-1.png", "5-Figure3-1.png", "5-Table3-1.png", "6-Table5-1.png" ] }
[ "What evaluation metrics were used in the experiment?", "What kind of instructional videos are in the dataset?", "What baseline algorithms were presented?" ]
[ [ "1912.01046-Baselines ::: Baseline2: Segment retrieval-5", "1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-4", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-5" ], [ "1912.01046-Introduction-5" ], [ "1912.01046-Baselines ::: Baseline2: Segment retrieval-1", "1912.01046-Baselines ::: Baseline2: Segment retrieval-3", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-1", "1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-2", "1912.01046-Baselines-0", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-3", "1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-1", "1912.01046-Baselines ::: Baseline2: Segment retrieval-0", "1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-0", "1912.01046-Baselines ::: Baseline2: Segment retrieval-4", "1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-3", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-0", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-2", "1912.01046-Baselines ::: Baseline2: Segment retrieval-2", "1912.01046-Baselines ::: Baseline1: Sentence-level prediction-4" ] ]
[ "For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy", "tutorial videos for a photo-editing software", "a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm" ]
433
1910.02339
Natural- to formal-language generation using Tensor Product Representations
Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structural information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space, and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, setting new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoLisp dataset for program synthesis. Ablation studies show that the improvements are mainly attributable to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning.
{ "paragraphs": [ [ "When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.", "In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9).", "Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis." ], [ "The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols.", "The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR `binding' operation, the tensor (or generalized outer) product $\\otimes $.", "Formally, suppose a symbolic type is defined by the roles $\\lbrace r_i \\rbrace $, and suppose that in a particular instance of that type, ${S}$, role $r_i$ is bound by filler $f_i$. 
The TPR embedding of ${S}$ is the order-2 tensor = i i i = i i i where $\\lbrace _i \\rbrace $ are vector embeddings of the fillers and $\\lbrace _i \\rbrace $ are vector embeddings of the roles. In Eq. SECREF2, and below, for notational simplicity we conflate order-2 tensors and matrices.", "As a simple example, consider the symbolic type string, and choose roles to be $r_1 = $ first_element, $r_2 = $ second_element, etc. Then in the specific string S = cba, the first role $r_1$ is filled by c, and $r_2$ and $r_3$ by b and a, respectively. The TPR for S is $\\otimes _1 + \\otimes _2 + \\otimes _3$, where $, , $ are the vector embeddings of the symbols a, b, c, and $_i$ is the vector embedding of role $r_i$.", "A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be $n_{\\mathrm {R}}, n_{\\mathrm {F}}$, respectively. Define the matrix of all possible role vectors to be $\\in ^{d_{\\mathrm {R}}\\times n_{\\mathrm {R}}}$, with column $i$, $[]_{:i} = _i \\in ^{d_{\\mathrm {R}}}$, comprising the embedding of $r_i$. Similarly let $\\in ^{d_{\\mathrm {F}}\\times n_{\\mathrm {F}}}$ be the matrix of all possible filler vectors. The TPR $\\in ^{d_{\\mathrm {F}}\\times d_{\\mathrm {R}}}$. Below, $d_{\\mathrm {R}}, n_{\\mathrm {R}}, d_{\\mathrm {F}}, n_{\\mathrm {F}}$ will be hyper-parameters, while $, $ will be learned parameter matrices.", "Using summation in Eq.SECREF2 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding $$ of the the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure ${S}$ given its TPR $$. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix $$ has a left inverse $$: $= $. Now define the unbinding (or dual) vector for role $r_j$, $_j$, to be the $j^{{\\mathrm {th}}}$ column of $^\\top $: $U_{:j}^\\top $. Then, since $[]_{ji} = []_{ji} = _{j:} _{:i} = [^\\top _{:j}]^\\top _{:i} =_j^\\top _i = _i^\\top _j$, we have $_i^\\top _j = \\delta _{ji}$. This means that, to recover the filler of $r_j$ in the structure with TPR $$, we can take its tensor inner product (or matrix-vector product) with $_j$: j = [ i i i] j = i i ij = j", "In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors $_i$ and TPR unbinding using the tensor inner product with unbinding vectors $_j$. Binding will be used to produce the order-2 tensor $_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another." ], [ "We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. 
It depicts the following high-level description.", "As shown in Figure FIGREF3, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as $(R \\hspace{2.84526pt}A_1 \\hspace{2.84526pt}A_2)$, a 3-tuple consisting of a binary relation (or operation) $R$ with its two arguments. The “TP-N2F encoder” uses two LSTMs to produce a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the “context” over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the “Reasoning MLP”, which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the “TP-N2F decoder” attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for a $(R \\hspace{2.84526pt}A_1 \\hspace{2.84526pt}A_2)$ tuple (detailed explanation in Sec. SECREF7). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural network implementation of this description of N2F tasks." ], [ "In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations." ], [ "Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence $S$ with $n$ word tokens $\\lbrace w^0,w^1,...,w^{n-1}\\rbrace $, each word token $w^t$ is assigned a learned role vector $^t$, soft-selected from the learned dictionary $$, and a learned filler vector $^t$, soft-selected from the learned dictionary $$ (Sec. SECREF2). The mechanism closely follows that of BIBREF7, and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Then each word token $w^t$ is represented by the tensor product of the role vector and the filler vector: $^t=^t \\otimes ^t$. In addition to the set of all its token embeddings $\\lbrace ^0, \\ldots , ^{n-1} \\rbrace $, the sentence $S$ as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: $_S = \\sum _{t=0}^{n-1} ^t$.", "Using TPRs to encode natural language has several advantages. First, natural language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in BIBREF7). Second, TPRs avoid the Bag of Word (BoW) confusion BIBREF8: the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay but the encodings are different with TPR embedding, because the role filled by a symbol changes with its context." ], [ "In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. 
Given a binary relation $rel$, a relational tuple can be written as $(rel \\hspace{2.84526pt}arg_1 \\hspace{2.84526pt}arg_2)$ where $arg_1,arg_2$ indicate two arguments of relation $rel$. Let us adopt the two positional roles, $p_i^{rel} = $ arg$_i$-of-$rel$ for $i=1,2$. The filler of role $p_i^{rel}$ is $arg_i$. Now let us use role decomposition recursively, noting that the role $p_i^{rel}$ can itself be decomposed into a sub-role $p_i = $ arg$_i$-of-$\\underline{\\hspace{5.69054pt}}$ which has a sub-filler $rel$. Suppose that $arg_i, rel, p_i$ are embedded as vectors $_i, , _i$. Then the TPR encoding of $p_i^{rel}$ is $_{rel} \\otimes _i$, so the TPR encoding of filler $arg_i$ bound to role $p_i^{rel}$ is $_i \\otimes (_{rel} \\otimes _i)$. The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple $(rel \\hspace{2.84526pt}arg_1 \\hspace{2.84526pt}arg_2)$, as: = 1 rel 1 + 2 rel 2. Given the unbinding vectors $^{\\prime }_i$ for positional role vectors $_i$ and the unbinding vector $^{\\prime }_{rel}$ for the vector $_{rel}$ that embeds relation $rel$, each argument can be unbound in two steps as shown in Eqs. SECREF7–SECREF7. i' = [ 1 rel 1 + 2 rel 2 ] i' = i rel", "[ i rel ] 'rel = i Here $\\cdot $ denotes the tensor inner product, which for the order-3 $$ and order-1 $^{\\prime }_i$ in Eq. SECREF7 can be defined as $[\\cdot ^{\\prime }_i]_{jk} = \\sum _l []_{jkl} [^{\\prime }_i]_l$; in Eq. SECREF7, $\\cdot $ is equivalent to the matrix-vector product.", "Our proposed scheme can be contrasted with the TPR scheme in which $(rel \\hspace{2.84526pt}arg_1 \\hspace{2.84526pt}arg_2)$ is embedded as $_{rel} \\otimes _1 \\otimes _2$ (e.g., BIBREF11, BIBREF12). In that scheme, an $n$-ary-relation tuple is embedded as an order-($n+1$) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an $n$-ary-relation tuple is still embedded as an order-3 tensor: there are just $n$ terms in the sum in Eq. SECREF7, using $n$ position vectors $_1, \\dots , _n$; unbinding simply requires knowing the unbinding vectors for these fixed position vectors.", "In the model, the order-3 tensor $$ of Eq. SECREF7 has a different status than the order-2 tensor $_S$ of Sec. SECREF5. $_S$ is a TPR by construction, whereas $$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. SECREF7, and performs the unbinding operations which that structure calls for. In Appendix Sec. SECREF65, it is shown that, if unbinding each of a set of roles from some unknown tensor $$ gives a target set of fillers, then $$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. SECREF7." ], [ "To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. 
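Before turning to the learning scheme, the binding and two-step unbinding just described can be checked numerically with a small sketch. The NumPy example below is purely illustrative: the dimensions d_A, d_O, d_P follow the paper's notation for argument, operator, and position vectors, but the random embeddings and variable names are made up.

```python
# Numerical sketch of TPR binding and two-step unbinding for a tuple (rel arg1 arg2).
# All embeddings are random; only the algebra matters here.
import numpy as np
rng = np.random.default_rng(0)

d_A, d_O, d_P = 6, 5, 4                                  # argument, operator, position dims
a1, a2 = rng.normal(size=d_A), rng.normal(size=d_A)      # argument fillers
r_rel = rng.normal(size=d_O)                             # relation embedding
P = rng.normal(size=(d_P, 2))                            # position role vectors p_1, p_2 as columns

# Binding: H = a1 (x) r_rel (x) p_1  +  a2 (x) r_rel (x) p_2   (an order-3 tensor)
H = (np.einsum('i,j,k->ijk', a1, r_rel, P[:, 0]) +
     np.einsum('i,j,k->ijk', a2, r_rel, P[:, 1]))

# Unbinding vectors: duals of the position roles, and of the relation vector.
P_dual = np.linalg.pinv(P).T                             # columns satisfy p_i . p'_j = delta_ij
u_rel = r_rel / r_rel.dot(r_rel)                         # dual of the single relation vector

# Step 1 unbinds a position, giving B_1 = a_1 (x) r_rel; step 2 unbinds the relation.
B1 = np.einsum('ijk,k->ij', H, P_dual[:, 0])
a1_recovered = B1 @ u_rel
print(np.allclose(a1_recovered, a1))                     # True up to floating-point error
```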
As shown in (SECREF8), we formalize the learning scheme as learning a mapping function $f_{\\mathrm {mapping}}(\\cdot )$, which, given a structural representation of the natural-language input, $_S$, outputs a tensor $_F$ from which the structural representation of the output can be generated. At the role level of description, there's nothing more to be said about this mapping; how it is modeled at the neural network level is discussed in Sec. SECREF10. F = fmapping(S)" ], [ "As shown in Figure FIGREF3, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding. The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. SECREF5. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. SECREF7: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. SECREF8) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs $^t$ produced by the TP-N2F Encoder. The detailed implementations are introduced below." ], [ "The TP-N2F encoder follows the role scheme in Sec. SECREF5 to encode each word token $w^t$ by soft-selecting one of $n_{\\mathrm {F}}$ fillers and one of $n_{\\mathrm {R}}$ roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure FIGREF11.) At each time-step $t$, the Filler-LSTM and the Role-LSTM take a learned word-token embedding $^t$ as input. The hidden state of the Filler-LSTM, $_{\\mathrm {F}}^t$, is used to compute softmax scores $u_k^{\\mathrm {F}}$ over $n_{\\mathrm {F}}$ filler slots, and a filler vector $^{t} = ^{\\mathrm {F}}$ is computed from the softmax scores (recall from Sec. SECREF2 that $$ is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, $_{\\mathrm {R}}^t$. $f_{\\mathrm {F}}$ and $f_{\\mathrm {R}}$ denote the functions that generate $^{t}$ and $^t$ from the hidden states of the two LSTMs. The token $w^t$ is encoded as $^t$, the tensor product of $^{t}$ and $^t$. $^t$ replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector $^t$: see (SECREF10)–(SECREF10). After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products $\\sum _t ^t$ to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. SECREF22 of the Appendix. 
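A minimal single-step sketch of this soft filler/role selection and binding, assuming PyTorch, is given below; the module and variable names are invented, the two LSTMs and the flattened-TPR recurrence are omitted, and only the hyperparameter names n_F, n_R, d_F, d_R and the temperature 0.1 are taken from the paper. The paper's own update equations follow.

```python
# Illustrative single time-step of the encoder's filler/role soft-selection and binding (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TPRBinder(nn.Module):
    def __init__(self, d_hidden, n_F, n_R, d_F, d_R, temperature=0.1):
        super().__init__()
        self.W_f = nn.Parameter(torch.randn(d_F, n_F))    # dictionary of filler vectors
        self.W_r = nn.Parameter(torch.randn(d_R, n_R))    # dictionary of role vectors
        self.score_f = nn.Linear(d_hidden, n_F)           # scores from the Filler-LSTM state
        self.score_r = nn.Linear(d_hidden, n_R)           # scores from the Role-LSTM state
        self.temperature = temperature

    def forward(self, h_filler, h_role):
        a_f = F.softmax(self.score_f(h_filler) / self.temperature, dim=-1)   # (batch, n_F)
        a_r = F.softmax(self.score_r(h_role) / self.temperature, dim=-1)     # (batch, n_R)
        f_t = a_f @ self.W_f.T                             # soft-selected filler, (batch, d_F)
        r_t = a_r @ self.W_r.T                             # soft-selected role,   (batch, d_R)
        return torch.einsum('bf,br->bfr', f_t, r_t)        # token TPR  f_t (x) r_t
```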
Ft = fFiller-LSTM(t,t-1, Ft-1) Rt = fRole-LSTM(t,t-1, Rt-1)", "t = t t = fF(Ft) fR(Rt)" ], [ "The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure FIGREF13). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs $\\lbrace ^t \\rbrace $. The hidden-state $$ of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make $$ suitably approximate a TPR. At each time step $t$, the hidden state $^t$ of the Tuple-LSTM with attention (The version in BIBREF13) (SECREF12) is fed as input to the unbinding module, which regards $^t$ as if it were the TPR of a relational tuple with $m$ arguments possessing the role structure described in Sec. SECREF7: $^t \\approx \\sum _{i=1}^{m} _{i}^t \\otimes _{rel}^t \\otimes _i$. (In Figure FIGREF13, the assumed hypothetical form of $^t$, as well as that of $_i^t$ below, is shown in a bubble with dashed border.) To decode a binary relational tuple, the unbinding module decodes it from $^t$ using the two steps of TPR unbinding given in (SECREF7)–(SECREF7). The positional unbinding vectors $^{\\prime }_{i}$ are learned during training and shared across all time steps. After the first unbinding step (SECREF7), i.e., the inner product of $^t$ with $^{\\prime }_i$, we get tensors $_{i}^t$ (SECREF12). These are treated as the TPRs of two arguments $_i^t$ bound to a relation $_{rel}^t$. A relational unbinding vector $_{rel}^{\\prime t}$ is computed by a linear function from the sum of the $_{i}^t$ and used to compute the inner product with each $_i^t$ to yield $_i^t$, which are treated as the embedding of argument vectors (SECREF12). Based on the TPR theory, $_{rel}^{\\prime t}$ is passed to a linear function to get $_{rel}^t$ as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (Detailed equations are in Appendix Sec. SECREF42) t = Atten(fTuple-LSTM(relt,arg1t,arg2t,t-1,ct-1),[0,...,n-1])", "1t = t 1' 2t = t 2'", "rel't = flinear(1t + 2t) 1t = 1t rel't 2t = 2t rel't" ], [ "During inference time, natural language questions are encoded via the encoder and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step.", "TP-N2F is trained using back-propagation BIBREF14 with the Adam optimizer BIBREF15 and teacher-forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens from the argument vocabulary. For an input ${\\mathcal {I}}$ that generates $N$ output relational tuples, the loss is the sum of the cross entropy loss ${\\mathcal {L}}$ between the true labels $L$ and predicted tokens for relations and arguments as shown in (SECREF14). 
LI = i=0N-1L(reli, Lreli) + i=0N-1j=12L(argji, Largji)" ], [ "The proposed TP-N2F model is evaluated on two N2F tasks, generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results of each task and the analysis of the unbinding relation vectors are introduced in turn. Details of experiments and datasets are described in Sec. SECREF20 in the Appendix." ], [ "Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline." ], [ "Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). 
TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations." ], [ "To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. K-means clustering results on the average vectors are presented in Figure FIGREF71 and Figure FIGREF72 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operation-sequence-answer pairs. More clustering results are presented in the Appendix A.6." ], [ "N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing BIBREF19, BIBREF20, BIBREF21, BIBREF16, BIBREF17, BIBREF18. These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language BIBREF7, BIBREF9. TPR unbinding has also been used to generate natural language captions from images BIBREF8. Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space BIBREF22, BIBREF11, BIBREF12. However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks." ], [ "In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoder-decoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The results show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural- to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. 
We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks." ], [ "In this section, we present details of the experiments of TP-N2F on the two datasets. We present the implementation of TP-N2F on each dataset.", "The MathQA dataset consists of about 37k math word problems ((80/12/8)% training/dev/testing problems), each with a corresponding list of multi-choice options and an straight-line operation sequence program to solve the problem. An example from the dataset is presented in the Appendix A.4. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed to generate the solution for the given math problem. We use the execution script from BIBREF16 to execute the generated operation sequence and compute the multi-choice accuracy for each problem. During our experiments we observed that there are about 30% noisy examples (on which the execution script fails to get the correct answer on the ground truth program). Therefore, we report both execution accuracy (the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly).", "The AlgoLisp dataset BIBREF17 is a program synthesis dataset, which has 79k/9k/10k training/dev/testing samples. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of commands from leaves to root and (as in MathQA) use the symbol $\\#_i$ to indicate the result of the $i^{\\mathrm {th}}$ command (generated previously by the model). A dataset sample with our parsed command sequence is presented in the Appendix A.4. AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: accuracy of passing all test cases (Acc), accuracy of passing 50% of test cases (50p-Acc), and accuracy of generating an exactly matched program (M-Acc). AlgoLisp has about 10% noise data (where the execution script fails to pass all test cases on the ground truth program), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed).", "We use $d_{\\mathrm {R}}, n_{\\mathrm {R}}, d_{\\mathrm {F}}, n_{\\mathrm {F}}$ to indicate the TP-N2F encoder hyperparameters, the dimension of role vectors, the number of roles, the dimension of filler vectors and the number of fillers. $d_{Rel}, d_{Arg},d_{Pos}$ indicate the TP-N2F decoder hyper-parameters, the dimension of relation vectors, the dimension of argument vectors, and the dimension of position vectors.", "In the experiment on the MathQA dataset, we use $n_{\\mathrm {F}}= 150$, $n_{\\mathrm {R}}= 50$, $d_{\\mathrm {F}}= 30$, $d_{\\mathrm {R}}= 20$, $d_{Rel} = 20$, $d_{Arg} = 10$, $d_{Pos} = 5$ and we train the model for 60 epochs with learning rate 0.00115. The reasoning module only contains one layer. As most of the math operators in this dataset are binary, we replace all operators taking three arguments with a set of binary operators based on hand-encoded rules, and for all operators taking one argument, a padding symbol is appended. For the baseline SEQ2PROG-orig, TP2LSTM and LSTM2TP, we use hidden size 100, single-direction, one-layer LSTM. 
For the SEQ2PROG-best, we performed a hyperparameter search on the hidden size for both encoder and decoder; the best score is reported.", "In the experiment on the AlgoLisp dataset, we use $n_{\\mathrm {F}}= 150$, $n_{\\mathrm {R}}= 50$, $d_{\\mathrm {F}}= 30$, $d_{\\mathrm {R}}= 30$, $d_{Rel} = 30$, $d_{Arg} = 20$, $d_{Pos} = 5$ and we train the model for 50 epochs with learning rate 0.00115. We also use one-layer in the reasoning module like in MathQA. For this dataset, most function calls take three arguments so we simply add padding symbols for those functions with fewer than three arguments." ], [ "Filler-LSTM in TP-N2F encoder", "This is a standard LSTM, governed by the equations:", "$\\varphi , \\tanh $ are the logistic sigmoid and tanh functions applied elementwise. $\\flat $ flattens (reshapes) a matrix in $^{d_{\\mathrm {F}} \\times d_{\\mathrm {R}}}$ into a vector in $^{d_{\\mathrm {T}}}$, where $d_{\\mathrm {T}} = d_{\\mathrm {F}} d_{\\mathrm {R}}$. $\\odot $ is elementwise multiplication. The variables have the following dimensions: ft, ft, ft, ft, ft, ft, ff, fg, fi, fo, ♭(t-1) RdT", "wt Rd", "ff, fg, fi, fo RdT d", "ff, fg, fi, fo RdT dT", "Filler vector", "The filler vector for input token $w^t$ is $^t$, defined through an attention vector over possible fillers, $_{\\mathrm {f}}^t$:", "($W_{\\mathrm {f}}$ is the same as $$ of Sec. SECREF2.) The variables' dimensions are: fa RnF dT", "ft RnF", "f RdF nF", "t RdF $T$ is the temperature factor, which is fixed at 0.1.", "Role-LSTM in TP-N2F encoder", "Similar to the Filler-LSTM, the Role-LSTM is also a standard LSTM, governed by the equations:", "The variable dimensions are: rt, rt, rt, rt, rt, rt, rf, rg, ri, ro, ♭(t-1) RdT", "wt Rd", "rf, rg, ri, ro RdT d", "rf, rg, ri, ro RdT dT", "Role vector", "The role vector for input token $w^t$ is determined analogously to its filler vector:", "The dimensions are: ra RnR dT", "rt RnR", "r RdR nR", "t RdR", "Binding", "The TPR for the filler/role binding for token $w^t$ is then:", "where t RdR dF" ], [ "$^0 \\in \\mathbb {R}^{d_{\\mathrm {H}}}$, where $d_{\\mathrm {H}} = d_{\\mathrm {A}}, d_{\\mathrm {O}}, d_{\\mathrm {P}}$ are dimension of argument vector, operator vector and position vector. $f_{\\mathrm {mapping}}$ is implemented with a MLP (linear layer followed by a tanh) for mapping the $_t \\in \\mathbb {R}^{d_{\\mathrm {T}}}$ to the initial state of decoder $^0$." ], [ "Tuple-LSTM", "The output tuples are also generated via a standard LSTM:", "Here, $\\gamma $ is the concatenation function. $_{Rel}^{t-1}$ is the trained embedding vector for the Relation of the input binary tuple, $_{Arg1}^{t-1}$ is the embedding vector for the first argument and $_{Arg2}^{t-1}$ is the embedding vector for the second argument. Then the input for the Tuple LSTM is the concatenation of the embedding vectors of relation and arguments, with dimension $d_{\\mathrm {dec}}$. t, t, t, t, t, inputt, f, g, i, o, ♭(t-1) RdH", "dt Rddec", "f, g, i, o RdH ddec", "f, g, i, o RdH dH", "t RdH ${\\mathrm {Atten}}$ is the attention mechanism used in BIBREF13, which computes the dot product between $_{\\mathrm {input}}^t$ and each $_{t^{\\prime }}$. Then a linear function is used on the concatenation of $_{\\mathrm {input}}^t$ and the softmax scores on all dot products to generate $^t$. The following equations show the attention mechanism:", "${\\mathrm {score}}$ is the score function of the attention. In this paper, the score function is dot product. 
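For orientation, a generic dot-product attention step in PyTorch is sketched below. This is not a faithful reimplementation of the paper's Atten, which applies a final linear layer to a particular concatenation whose dimension listing continues after this sketch; the names and shapes here are assumptions.

```python
# Generic (Luong-style) dot-product attention sketch in PyTorch; illustration only.
import torch
import torch.nn.functional as F

def dot_product_attention(h_input, context):
    # h_input: (batch, d); context: (batch, n, d) stacked encoder outputs
    scores = torch.einsum('bd,bnd->bn', h_input, context)     # dot-product scores
    weights = F.softmax(scores, dim=-1)                       # attention distribution
    attended = torch.einsum('bn,bnd->bd', weights, context)   # weighted sum of the context
    return attended, weights
```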
T RdH n", "t Rn", "t RdH", "RdH (dT+n)", "Unbinding", "At each timestep $t$, the 2-step unbinding process described in Sec. SECREF7 operates first on an encoding of the triple as a whole, $$, using two unbinding vectors $_i^{\\prime }$ that are learned but fixed for all tuples. This first unbinding gives an encoding of the two operator-argument bindings, $_i$. The second unbinding operates on the $_i$, using a generated unbinding vector for the operator, $_{rel}^{\\prime }$, giving encodings of the arguments, $_i$. The generated unbinding vector for the operator, $^{\\prime }$, and the generated encodings of the arguments, $_i$, each produce a probability distribution over symbolic operator outputs $Rel$ and symbolic argument outputs $Arg_i$; these probabilities are used in the cross-entropy loss function. For generating a single symbolic output, the most-probable symbols are selected.", "The dimensions are: rel't RdO", "1t, 2t RdA", "'1, '2 RdP", "1t, 2t RdA dO", "dual RdH", "rt RnO dO", "at RnA dA", "rt RnR", "a1t, a2t RnA" ], [ "Here we show that, if learning is successful, the order-3 tensor $$ that each iteration of the decoder's Tuple LSTM feeds to the decoder's Unbinding Module (Figure FIGREF13) will be a TPR of the form assumed in Eq. SECREF7, repeated here: = j j rel j. The operations performed by the decoder are given in Eqs. SECREF7–SECREF7, and Eqs. SECREF12–SECREF12, rewritten here: i' = i", "i rel' = i This is the standard TPR unbinding operation, used recursively: first with the unbinding vectors for positions, $_i^{\\prime }$, then with the unbinding vector for the operator, $_{rel}^{\\prime }$. It therefore suffices to analyze a single unbinding; the result can then be used recursively. This in effect reduces the problem to the order-2 case. What we will show is: given a set of unbinding vectors $\\lbrace _i^{\\prime } \\rbrace $ which are dual to a set of role vectors $\\lbrace _i \\rbrace $, with $i$ ranging over some index set $I$, if $$ is an order-2 tensor such that 'i = i, i I then = i I i i + TPR + for some tensor $$ that annihilates all the unbinding vectors: 'i = 0, i I. If learning is successful, the processing in the decoder will generate the target relational tuple $(R, A_1, A_2)$ by obeying Eq. SECREF65 in the first unbinding, where we have $_i^{\\prime } = _i^{\\prime }, _i = _i, I = \\lbrace 1, 2\\rbrace $, and obeying Eq. SECREF65 in the second unbinding, where we have $_i^{\\prime } = _{rel}^{\\prime }, _i^{\\prime } = _i$, with $I =$ the set containing only the null index.", "Treat rank-2 tensors as matrices; then unbinding is simply matrix-vector multiplication. Assume the set of unbinding vectors is linearly independent (otherwise there would in general be no way to satisfy Eq. SECREF65 exactly, contrary to assumption). Then expand the set of unbinding vectors, if necessary, into a basis $\\lbrace ^{\\prime }_k\\rbrace _{k \\in K \\supseteq I}$. Find the dual basis, with $_k$ dual to $^{\\prime }_k$ (so that $_l^\\top _j^{\\prime } = \\delta _{lj}$). Because $\\lbrace ^{\\prime }_k\\rbrace _{k \\in K}$ is a basis, so is $\\lbrace _k\\rbrace _{k \\in K}$, so any matrix $$ can be expanded as $= \\sum _{k \\in K} _k _k^{\\top }$. Since $^{\\prime }_i = _i, \\forall i \\in I$ are the unbinding conditions (Eq. SECREF65), we must have $_i = _i, i \\in I$. Let $_{{\\mathrm {TPR}}} \\equiv \\sum _{i \\in I} _i _i^{\\top }$. 
This is the desired TPR, with fillers $_i$ bound to the role vectors $_i$ which are the duals of the unbinding vectors $_i^{\\prime }$ ($i \\in I$). Then we have $= _{{\\mathrm {TPR}}} + $ (Eq. SECREF65) where $\\equiv \\sum _{j \\in K, j \\notin I} _j _j^{\\top }$; so $_i^{\\prime } = {\\mathbf {0}}, i \\in I$ (Eq. SECREF65). Thus, if training is successful, the model must have learned how to feed the decoder with order-3 TPRs with the structure posited in Eq. SECREF65.", "The argument so far addresses the case where the unbinding vectors are linearly independent, making it possible to satisfy Eq. SECREF65 exactly. In relatively high-dimensional vector spaces, it will often happen that even when the number of unbinding vectors exceeds the dimension of their space by a factor of 2 or 3 (which applies to the TP-N2F models presented here), there is a set of role vectors $\\lbrace _k \\rbrace _{k \\in K}$ approximately dual to $\\lbrace ^{\\prime }_k \\rbrace _{k \\in K}$, such that $_l^\\top _j^{\\prime } = \\delta _{lj} \\hspace{2.84526pt}\\forall l, j \\in K$ holds to a good approximation. (If the distribution of normalized unbinding vectors is approximately uniform on the unit sphere, then choosing the approximate dual vectors to equal the unbinding vectors themselves will do, since they will be nearly orthonormal BIBREF10. If the $\\lbrace ^{\\prime }_k \\rbrace _{k \\in K}$ are not normalized, we just rescale the role vectors, choosing $_k = _k^{\\prime } / \\Vert _k^{\\prime } \\Vert ^2$.) When the number of such role vectors exceeds the dimension of the embedding space, they will be overcomplete, so while it is still true that any matrix $$ can be expanded as above ($= \\sum _{k \\in K} _k _k^{\\top }$), this expansion will no longer be unique. So while it remains true that $$ a TPR, it is no longer uniquely decomposable into filler/role pairs. The claim above does not claim uniqueness in this sense, and remains true.)" ], [ "Problem: The present polulation of a town is 3888. Population increase rate is 20%. Find the population of town after 1 year?", "Options: a) 2500, b) 2100, c) 3500, d) 3600, e) 2700", "Operations: multiply(n0,n1), divide(#0,const-100), add(n0,#1)" ], [ "Problem: Consider an array of numbers and a number, decrements each element in the given array by the given number, what is the given array?", "Program Nested List: (map a (partial1 b –))", "Command-Sequence: (partial1 b –), (map a #0)" ], [ "In this section, we display some generated samples from the two datasets, where the TP-N2F model generates correct programs but LSTM-Seq2Seq does not.", "Question: A train running at the speed of 50 km per hour crosses a post in 4 seconds. What is the length of the train?", "TP-N2F(correct):", "(multiply,n0,const1000) (divide,#0,const3600) (multiply,n1,#1)", "LSTM(wrong):", "(multiply,n0,const0.2778) (multiply,n1,#0)", "Question: 20 is subtracted from 60 percent of a number, the result is 88. Find the number?", "TP-N2F(correct):", "(add,n0,n2) (divide,n1,const100) (divide,#0,#1)", "LSTM(wrong):", "(add,n0,n2) (divide,n1,const100) (divide,#0,#1) (multiply,#2,n3) (subtract,#3,n0)", "Question: The population of a village is 14300. It increases annually at the rate of 15 percent. What will be its population after 2 years?", "TP-N2F(correct):", "(divide,n1,const100) (add,#0,const1) (power,#1,n2) (multiply,n0,#2)", "LSTM(wrong):", "(multiply,const4,const100) (sqrt,#0)", "Question: There are two groups of students in the sixth grade. 
There are 45 students in group a, and 55 students in group b. If, on a particular day, 20 percent of the students in group a forget their homework, and 40 percent of the students in group b forget their homework, then what percentage of the sixth graders forgot their homework?", "TP-N2F(correct):", "(add,n0,n1) (multiply,n0,n2) (multiply,n1,n3) (divide,#1,const100) (divide,#2,const100) (add,#3,#4) (divide,#5,#0) (multiply,#6,const100)", "LSTM(wrong):", "(multiply,n0,n1) (subtract,n0,n1) (divide,#0,#1)", "Question: 1 divided by 0.05 is equal to", "TP-N2F(correct):", "(divide,n0,n1)", "LSTM(wrong):", "(divide,n0,n1) (multiply,n2,#0)", "Question: Consider a number a, compute factorial of a", "TP-N2F(correct):", "( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( invoke1,#5,a )", "LSTM(wrong):", "( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( len,a ) ( invoke1,#5,#6 )", "Question: Given an array of numbers and numbers b and c, add c to elements of the product of elements of the given array and b, what is the product of elements of the given array and b?", "TP-N2F(correct):", "( partial, b,* ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )", "LSTM(wrong):", "( partial1,b,+ ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )", "Question: You are given an array of numbers a and numbers b , c and d , let how many times you can replace the median in a with sum of its digits before it becomes a single digit number and b be the coordinates of one end and c and d be the coordinates of another end of segment e , your task is to find the length of segment e rounded down", "TP-N2F(correct):", "( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 ) ( - #13 c ) ( digits arg1 ) ( len #15 ) ( == #16 1 ) ( digits arg1 ) ( reduce #18 0 + ) ( self #19 ) ( + 1 #20 ) ( if #17 0 #21 ) ( lambda1 #22 ) ( sort a ) ( len a ) ( / #25 2 ) ( deref #24 #26 ) ( invoke1 #23 #27 ) ( - #28 c ) ( * #14 #29 ) ( - b d ) ( - b d ) ( * #31 #32 ) ( + #30 #33 ) ( sqrt #34 ) ( floor #35 )", "LSTM(wrong): ( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 c ) ( - #13 ) ( - b d ) ( - b d ) ( * #15 #16 ) ( * #14 #17 ) ( + #18 ) ( sqrt #19 ) ( floor #20 )", "Question: Given numbers a , b , c and e , let d be c , reverse digits in d , let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f , find the length of segment f squared", "TP-N2F(correct):", "( digits c ) ( reverse #0 ) ( * arg1 10 ) ( + #2 arg2 ) ( lambda2 #3 ) ( reduce #1 0 #4 ) ( - a #5 ) ( digits c ) ( reverse #7 ) ( * arg1 10 ) ( + #9 arg2 ) ( lambda2 #10 ) ( reduce #8 0 #11 ) ( - a #12 ) ( * #6 #13 ) ( + b 1 ) ( range 0 #15 ) ( digits arg1 ) ( reverse #17 ) ( * arg1 10 ) ( + #19 arg2 ) ( lambda2 #20 ) ( reduce #18 0 #21 ) ( digits arg2 ) ( reverse #23 ) ( * arg1 10 ) ( + #25 arg2 ) ( lambda2 #26 ) ( reduce #24 0 #27 ) ( > #22 #28 ) ( if #29 arg1 arg2 ) ( lambda2 #30 ) ( reduce #16 0 #31 ) ( - #32 e ) ( + b 1 ) ( range 0 #34 ) ( digits arg1 ) ( reverse #36 ) ( * arg1 10 ) ( + #38 arg2 ) ( lambda2 #39 ) ( reduce #37 0 #40 ) ( digits arg2 ) ( reverse #42 ) ( * arg1 10 ) ( + #44 arg2 ) ( lambda2 
#45 ) ( reduce #43 0 #46 ) ( > #41 #47 ) ( if #48 arg1 arg2 ) ( lambda2 #49 ) ( reduce #35 0 #50 ) ( - #51 e ) ( * #33 #52 ) ( + #14 #53 )", "LSTM(wrong):", "( - a d ) ( - a d ) ( * #0 #1 ) ( digits c ) ( reverse #3 ) ( * arg1 10 ) ( + #5 arg2 ) ( lambda2 #6 ) ( reduce #4 0 #7 ) ( - #8 e ) ( + b 1 ) ( range 0 #10 ) ( digits arg1 ) ( reverse #12 ) ( * arg1 10 ) ( + #14 arg2 ) ( lambda2 #15 ) ( reduce #13 0 #16 ) ( digits arg2 ) ( reverse #18 ) ( * arg1 10 ) ( + #20 arg2 ) ( lambda2 #21 ) ( reduce #19 0 #22 ) ( > #17 #23 ) ( if #24 arg1 arg2 ) ( lambda2 #25 ) ( reduce #11 0 #26 ) ( - #27 e ) ( * #9 #28 ) ( + #2 #29 )" ], [ "We run K-means clustering on both datasets with $k = 3,4,5,6$ clusters and the results are displayed in Figure FIGREF71 and Figure FIGREF72. As described before, unbinding-vectors for operators or functions with similar semantics tend to be closer to each other. For example, in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together at middle, and operators related to geometry such as square or volume are clustered together at bottom left. In AlgoLisp dataset, basic arithmetic functions are clustered at middle, and string processing functions are clustered at right." ] ], "section_name": [ "INTRODUCTION", "Background: Review of Tensor-Product Representation", "TP-N2F Model", "TP-N2F Model ::: Role-level description of N2F tasks", "TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for natural-language input", "TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for relational representations", "TP-N2F Model ::: Role-level description of N2F tasks ::: The TP-N2F Scheme for Learning the input-output mapping", "TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation", "TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F natural-language Encoder", "TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F Relational-Tuple Decoder", "TP-N2F Model ::: Inference and The Learning Strategy of the TP-N2F Model", "EXPERIMENTS", "EXPERIMENTS ::: Generating operation sequences to solve math problems", "EXPERIMENTS ::: Generating program trees from natural-language descriptions", "EXPERIMENTS ::: Interpretation of learned structure", "Related work", "CONCLUSION AND FUTURE WORK", "Appendix ::: Implementations of TP-N2F for experiments", "Appendix ::: Detailed equations of TP-N2F ::: TP-N2F encoder", "Appendix ::: Detailed equations of TP-N2F ::: Structure Mapping", "Appendix ::: Detailed equations of TP-N2F ::: TP-N2F decoder", "Appendix ::: The tensor that is input to the decoder's Unbinding Module is a TPR", "Appendix ::: Dataset samples ::: Data sample from MathQA dataset", "Appendix ::: Dataset samples ::: Data sample from AlgoLisp dataset", "Appendix ::: Generated programs comparison", "Appendix ::: Unbinding relation vector clustering" ] }
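The clustering analysis of the learned relation unbinding vectors described in the appendix above can be reproduced with a short scikit-learn script. The sketch below assumes the trained unbinding vectors have already been exported to an array `U` of shape (num_relations, d_O) together with a parallel list of relation names; both are hypothetical placeholders.

```python
# Sketch of the PCA + K-means analysis of relation unbinding vectors, assuming scikit-learn.
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_unbinding_vectors(U, names, k=5):
    coords = PCA(n_components=2).fit_transform(U)                  # 2-D projection for plotting
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
    for name, (x, y), c in zip(names, coords, labels):
        print(f"{name}\tcluster={c}\t({x:.2f}, {y:.2f})")
    return coords, labels
```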
{ "answers": [ { "annotation_id": [ "c37a3b9cec4a5d805f46b1f9775bec7c0d2edcda" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations." ], "extractive_spans": [], "free_form_answer": "Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "33571dbc5f7a6d15bea77b686d9b3b9cdc0c16a9" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). 
AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations." ], "extractive_spans": [], "free_form_answer": "Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results of AlgoLisp dataset", "As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "8732e4ab71c2d17f172863f97f901376dffb0ac9" ], "answer": [ { "evidence": [ "Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.", "FLOAT SELECTED: Table 1: Results on MathQA dataset testing set" ], "extractive_spans": [], "free_form_answer": "Operation accuracy: 71.89\nExecution accuracy: 55.95", "highlighted_evidence": [ "Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. 
Table TABREF16 presents the results.", "FLOAT SELECTED: Table 1: Results on MathQA dataset testing set" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no" ], "question": [ "How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?", "What is the performance proposed model achieved on AlgoList benchmark?", "What is the performance proposed model achieved on MathQA?" ], "question_id": [ "9c4a4dfa7b0b977173e76e2d2f08fa984af86f0e", "4c7ac51a66c15593082e248451e8f6896e476ffb", "05671d068679be259493df638d27c106e7dd36d0" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Overview diagram of TP-N2F.", "Figure 2: Implementation of TP-N2F encoder.", "Figure 3: Implementation of TP-N2F decoder.", "Table 1: Results on MathQA dataset testing set", "Table 2: Results of AlgoLisp dataset", "Figure 4: K-means clustering results: MathQA with 5 clusters and AlgoLisp with 4 clusters", "Figure 5: MathQA clustering results", "Figure 6: AlgoLisp clustering results" ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "6-Figure3-1.png", "7-Table1-1.png", "8-Table2-1.png", "8-Figure4-1.png", "14-Figure5-1.png", "14-Figure6-1.png" ] }
[ "How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?", "What is the performance proposed model achieved on AlgoList benchmark?", "What is the performance proposed model achieved on MathQA?" ]
[ [ "1910.02339-8-Table2-1.png", "1910.02339-EXPERIMENTS ::: Generating program trees from natural-language descriptions-0" ], [ "1910.02339-8-Table2-1.png", "1910.02339-EXPERIMENTS ::: Generating program trees from natural-language descriptions-0" ], [ "1910.02339-EXPERIMENTS ::: Generating operation sequences to solve math problems-0", "1910.02339-7-Table1-1.png" ] ]
[ "Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48", "Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48", "Operation accuracy: 71.89\nExecution accuracy: 55.95" ]
439
2003.06044
Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition
Dialogue act recognition is a fundamental task for an intelligent dialogue system. Previous work models the whole dialog to predict dialog acts, which may bring in noise from unrelated sentences. In this work, we design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information. We revise the attention distribution to focus on local and contextual semantic information by incorporating the relative position information between utterances. Based on the finding that the length of the dialog affects performance, we introduce a new dialog segmentation mechanism to analyze the effect of dialog length and context padding length under online and offline settings. The experiments show that our method achieves promising performance on two datasets, Switchboard Dialogue Act and DailyDialog, with accuracies of 80.34\% and 85.81\% respectively. Visualization of the attention weights shows that our method can learn the context dependency between utterances explicitly.
{ "paragraphs": [ [ "Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are selected and clustered then to expand the knowledge base. DA recognition is challenging due to the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic context.", "Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentence and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding the CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network.", "However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialog. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on a local and contextual semantic information by a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialog length quantitatively, we introduce a new dialog segmentation mechanism for the DA task and evaluate the performance of different dialogue length and context padding length under online and offline settings. 
Experiment and visualization show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance in two well-known datasets.", "The contributions of this paper are:", "We design a hierarchical model based on self-attention and revise the attention distribution to focus on a local and contextual semantic information by the relative position information between utterances.", "We introduce a new dialog segmentation mechaism for the DA task and analyze the effect of dialog length and context padding length.", "In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting." ], [ "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.", "Recent studies have applied deep learning based model for DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNN and RNN to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on the top of the hierarchical network to model the label transition dependency. BIBREF10 applied the context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and got a significant improvement over state-of-the-art results on SwDA datasets. On another aspect, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoders (DI-VAE) model to learn discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation." ], [ "Self-attention BIBREF11 achieves great success for its efficiently parallel computation and long-range dependency modeling.", "Given the input sequence $ s = \\left( s_1,...,s_n \\right) $ of n elements where $ s_i \\in \\mathbb {R}^{d_s} $. Each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \\in {\\mathbb {R}}^{d_s \\times d_z} $ where $ h $ present the index of head. For the head $h$, linear projection is applied to the sequence $s$ to obtain key (K), query (Q), and value (V) representations. the attention module gets the weight by computing dot-products between key/query pair and then $softmax$ normalizes the result. it is defined as:", "where $\\sqrt{d_z}$ is the scaling factor to counteract this effect that the dot products may grow large in magnitude. For all the heads,", "where $W^O \\in \\mathbb {R}^{(d_z*h)\\times d_s}$ is the output projection.", "One weakness of self-attention model it that they cannot encode the position information efficiently. Some methods have been proposed to encode the relative or absolute position of tokens in the sequence as the additional input to the model. 
BIBREF11 used sine and cosine functions of different frequencies and added positional encodings to the input embeddings together. It used absolute position embedding to capture relative positional relation by the characteristic of sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encoding to explicitly model relative position by independent semantic parameter. It demonstrated significant improvements even when entirely replacing conventional absolute position encodings. BIBREF12 proposed to model localness for the self-attention network by a learnable Gaussian bias which enhanced the ability to model local relationship and demonstrated the effectiveness on the translation task.", "In our study, we design a local contextual attention model, which incorporates relative position information by a learnable Gaussian bias into original attention distribution. Different from BIBREF12, in our method, the distribution center is regulated around the corresponding utterance with a window, which indicates the context dependency preference, for capturing more local contextual dependency." ], [ "Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. Given the dataset, $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$.", "Figure FIGREF6 shows our overall model structure. For the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted into dense vector representations $e_m$ from one-hot token representation. And then, we apply LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$:", "where $embed$ represents the embedding layer which can be initialized by pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. LSTM helps us get the context-aware sentence representation for the input sequence. There are several approaches to represent the sentence from the words. Following BIBREF22, we add a max-pooling layer after LSTM, which selects the maximum value in each dimension from the hidden units. In our experiment, LSTM with max-pooling does perform a little better than LSTM with last-pooling, which is used in BIBREF9.", "Afterwards, we get the utterances vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$ where $ u_j \\in \\mathbb {R}^{d_s}, d_s$ is the dimension of hidden units. As we discussed in section SECREF7, given the sequence $ s \\in \\mathbb {R}^{N_l*d_s}$, self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and get the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which represents the dependency encoder between utterances, and the current utterance encoder $s$:", "Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance." 
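To make the hierarchical architecture in the Methodology paragraphs above concrete, here is a minimal sketch in Python/PyTorch: an LSTM utterance encoder with max-pooling, single-head dot-product self-attention with a residual connection across utterance vectors, and a two-layer ReLU classifier. The module names, dimensions, and the single-head simplification (no separate Q/K/V projections) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalDAModel(nn.Module):
    """Sketch: LSTM utterance encoder with max-pooling, dot-product
    self-attention with a residual connection over utterance vectors,
    and a two-layer ReLU classifier giving per-utterance DA logits."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=256, n_classes=42):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.fc1 = nn.Linear(hid_dim, hid_dim)
        self.fc2 = nn.Linear(hid_dim, n_classes)

    def forward(self, dialog_tokens):
        # dialog_tokens: (n_utterances, max_words) word ids for one dialogue
        h, _ = self.lstm(self.embed(dialog_tokens))   # (n_utt, max_words, hid)
        u = h.max(dim=1).values                       # max-pooled utterance vectors (n_utt, hid)
        scores = u @ u.t() / u.size(-1) ** 0.5        # attention between every pair of utterances
        attn = F.softmax(scores, dim=-1)
        ctx = attn @ u                                # context-dependent representations
        s = ctx + u                                   # residual connection around the attention module
        return self.fc2(F.relu(self.fc1(s)))          # per-utterance class logits

# toy usage: a dialogue of 4 utterances, each padded to 10 word ids
model = HierarchicalDAModel(vocab_size=1000)
logits = model(torch.randint(0, 1000, (4, 10)))
print(logits.shape)  # torch.Size([4, 42])
```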
], [ "The attention explicitly models the interaction between the utterances. However, for context modeling, original attention mechanism always considers all of the utterances in a dialogue which inhibits the relation among the local context and is prone to overfitting during training. It is natural that utterances in the local context always have strong dependencies in our daily dialog. Therefore, we add a learnable Gaussian bias with the local constraint to the weight normalized by $softmax$ to enhance the interaction between concerned utterances and its neighbors.", "The attention module formula is revised as:", "The first term is the original dot product self-attention model. $POS \\in \\mathbb {R}^{N\\times N}$ is the bias matrix, where N is the length of dialogue. The element $POS_{i,j}$ is defined following by gaussian distribution:", "$POS_{i,j}$ measures the dependency between the utterance $u_j$ and the utterance $u_i$ in terms of the relative position prior. $w_{i}$ represents for the standard deviation, which controls the weight decaying. Because of local constraint, $|c_{i} - i| <= C$, for each utterance $u_i$, the predicted center position $c_{i}$ and window size $ w_{i}$ is defined as followed:", "where $W_i^c,W_i^d \\in \\mathbb {R}^{1*N}$ are both learnable parameters. We initialized the parameter $W_i^c$ to 0, which leads to center position $ c_i = i $ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we assign the mean of key $\\overline{K}$ in attention mechanism to represent the context information. Moreover, the central position also indicates the dependency preference of the preceding utterances or subsequent utterances.", "It is worth noting that there is a little difference with BIBREF12, although we both revise the attention module by the Gaussian distribution. In our method, for the given utterance $u_{i}$, the distribution center $c_{i}$ is regulated for capturing the not only local but also contextual dependency, which can be formally expressed as: $c_{i} \\in (i-C,i+C)$. However, in their work, the distribution center can be anywhere in the sequence, and it is designed for capturing the phrasal patterns, which are essential for Neural Machine Translation task." ], [ "Previous work mainly focuses on the offline setting where we can access the whole utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is the natural demand in our real-time applications. For the online setting, we only care about the recognition result of the last utterance in the given context, as seen in the area with the red dashed line in Figure FIGREF6, our model is well compatible with online setting, we can calculate the attention between the last utterance and the other utterances directly where $K \\in \\mathbb {R}^{1\\times d}, Q \\in \\mathbb {R}^{n\\times d}, V \\in \\mathbb {R}^{n\\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention based models. Table TABREF17 shows the time complexity comparison excluding the time cost of first layer encoding, and the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to expand into the online setting, however, to have a fair comparison with previous work, in our experiments, we applied the models under the offline setting by default." ], [ "The length of different dialogues in the dataset varies a lot. 
It is worth noting that the length of dialog affects the model prediction. On the one hand, under the offline setting, we can access the whole utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances, the more efficient. However, on the other hand, if we put too many utterances in once prediction, it will model too much unrelated dependency in the long utterances sequence for both LSTM and attention mechanism based model. The sub-dialogues with the same length also enable efficiently batch training. To study how the dialogue length and context padding length will affect the performance, so we defined a sliding window $W$ which is the sub-dialogue length. Then, we separate each long dialogue into several small sub-dialogues. For example, the dialog $D$ is a sequence of utterances with length $n$, and we will get $\\lceil x/w \\rceil $ sub-dialogues, for the k-th sub-dialogues, the utterances sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. In order to avoid losing some context information caused by being separated, which will affect the context modeling for the utterances in the begin and end of the sub-dialog, we add the corresponding context with $P$ (stands for context padding) utterances at the begin and the end of each sliding window, so for the k-th sub-dialogues, the revised utterances sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context padding utterances, which can be formally expressed as:", "$M(i)=0$ if utterance $i$ is in the context padding otherwise 1, $L$ is the cross entropy.", "The $W$ and $P$ are both hyperparameters; in the experiment SECREF21, we will talk about the effect of the window size and the context padding length." ], [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label “+”. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.", "[1]The author claimed that they achieved 78.7%(81.3%) accuracy with pre-trained word embedding (fine-grained embedding). 
For a fair comparison, both previous and our work is simply based on pre-trained word embedding. [2]The author randomly selected two test sets which are different from previous and our work and achieved 77.15% and 79.74%, and we reimplemented in standard test sets." ], [ "In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. It is worth noting that BIBREF10 combined GloVeBIBREF28 and pre-trained ELMo representationsBIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. For baseline models, both CNN and LSTM, got similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to do recognition based on single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information. It indicates that context information is crucial in the DA recognition task. BERT can boost performance in a large margin. However, it costs too much time and resources. In this reason, we chose LSTM as our utterance encoder in further experiment.", "By modeling context information, the performance of the hierarchical model is improved by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, in our experiments, we removed the CRF module. In terms of different hierarchical models, our LSTM+BLSTM achieved good result. The accuracy was 80.00% which is even a little better than Hierarchical BLSTM-CRF BIBREF9. Relying on attention mechanism and local contextual modeling, our model, LSTM+Attention and LSTM+Local Contextual Attention, achieved 80.12% and 80.34% accuracy respectively. Compared with the previous best approach Hierarchical BLSTM-CRF, we can obtain a relative accuracy gain with 1.1% by our best model. It indicated that self-attention model can capture context dependency better than the BLSTM model. With adding the local constraint, we can get an even better result.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. It is worth noting that it is actually the same as single sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to discuss how the length of context padding will affect. As seen in the result, the accuracy increased when more context padding was used for both LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With sliding window size increasing, the more context was involved together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when window size was 1 and context padding length was 5. When window size increased, the performances of these two models dropped. 
However, our model (LSTM+LC Attention) can leverage the context information more efficiently, which achieved the best performance when window size was 10, and the model was more stable and robust to the different setting of window size.", "For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting. As seen in Table TABREF22, without subsequent utterances, the performances of these three models dropped. However, LSTM+LC Attention still outperformed the other two models." ], [ "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models. From table TABREF18 we can see that, the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of the $W$ to 10, which almost covers the whole utterances in the dialogue. Using the same way as SwDA dataset, we, first, set W to 1 and increased the length of context padding. As seen, modeling local context information, hierarchical models yielded significant improvement than sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the size of sliding window size W. From the experiments, we can see that LSTM+Attention always got a little better accuracy than LSTM+BLSTM. With window size increasing, the performances of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For the longer sliding window, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performances of these three models dropped without subsequent utterances." ], [ "In this section, we visualize the attention weights for analyzing how local contextual attention works in detail. Figure FIGREF24 shows the visualization of original attention and local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of grids is normalized by $softmax$, $M_{ij}$ represents for the dependency score between the utterance i and utterance j. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights annotated with red color, which is learned by the original attention. The original attention model gives the utterance “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens its attention weights due to the long distance apart.", "Overall, the additional Gaussian bias trend to centralize the attention distribution to the diagonal of the matrix, which is in line with our linguistic intuition that utterances that are far apart usually don't have too strong dependencies. 
As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances that are far apart. For the grids near the diagonal, it strengthens their dependency scores and, owing to its learnable magnitude, does not introduce other spurious dependencies." ], [ "In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model can explicitly capture the semantic dependencies between utterances inside the dialogue. To enhance our model with local contextual information, we revise the attention distribution by a learnable Gaussian bias to make it focus on the local neighbors. Based on our dialog segmentation mechanism, we find that local contextual attention reduces the noise through relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied under both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition.", "There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method to multi-task training with the two stages jointly. Incorporating supervised information from the dependencies between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition." ] ], "section_name": [ "Introduction", "Background ::: Related Work", "Background ::: Self-Attention", "Methodology", "Methodology ::: Modeling Local Contextual Attention", "Methodology ::: Online and Offline Predictions", "Methodology ::: Separate into Sub-dialogues", "Experiments ::: Datasets", "Experiments ::: Results on SwDA", "Experiments ::: Result on DailyDialog", "Experiments ::: Visualization", "Conclusions and Future Work" ] }
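The sliding-window segmentation with context padding described in the "Separate into Sub-dialogues" paragraphs above can be sketched in a few lines of Python; the function name and the 0-based indexing are illustrative choices, not part of the original formulation.

```python
def split_dialogue(utterances, W=10, P=5):
    """Split one dialogue into sub-dialogues of W utterances, each extended
    with up to P context-padding utterances on both sides; padded positions
    get mask M(i) = 0 so their loss is ignored during training."""
    n = len(utterances)
    sub_dialogues = []
    for start in range(0, n, W):
        end = min(start + W, n)
        left, right = max(0, start - P), min(n, end + P)
        window = utterances[left:right]
        # mask is 1 only for the W "real" positions, 0 for context padding
        mask = [1 if start <= left + i < end else 0 for i in range(len(window))]
        sub_dialogues.append((window, mask))
    return sub_dialogues

utts = [f"u{i}" for i in range(23)]
for window, mask in split_dialogue(utts, W=10, P=5):
    print(len(window), mask)
```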
{ "answers": [ { "annotation_id": [ "33e1ed1490d1fb2416e3c75974d6d437002b91b7" ], "answer": [ { "evidence": [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label “+”. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.", "In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. It is worth noting that BIBREF10 combined GloVeBIBREF28 and pre-trained ELMo representationsBIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. For baseline models, both CNN and LSTM, got similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to do recognition based on single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information. It indicates that context information is crucial in the DA recognition task. BERT can boost performance in a large margin. However, it costs too much time and resources. In this reason, we chose LSTM as our utterance encoder in further experiment.", "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing.", "FLOAT SELECTED: Table 6: Experiment results on DailyDialog dataset.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. 
It is worth noting that it is actually the same as single sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to discuss how the length of context padding will affect. As seen in the result, the accuracy increased when more context padding was used for both LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With sliding window size increasing, the more context was involved together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when window size was 1 and context padding length was 5. When window size increased, the performances of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently, which achieved the best performance when window size was 10, and the model was more stable and robust to the different setting of window size.", "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models. From table TABREF18 we can see that, the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of the $W$ to 10, which almost covers the whole utterances in the dialogue. Using the same way as SwDA dataset, we, first, set W to 1 and increased the length of context padding. As seen, modeling local context information, hierarchical models yielded significant improvement than sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the size of sliding window size W. From the experiments, we can see that LSTM+Attention always got a little better accuracy than LSTM+BLSTM. With window size increasing, the performances of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For the longer sliding window, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performances of these three models dropped without subsequent utterances." ], "extractive_spans": [ "Table TABREF20 ", "Table TABREF22", "Table TABREF23" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. ", "In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. ", "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. 
W,P indicate the size of sliding window and context padding length during training and testing.", "FLOAT SELECTED: Table 6: Experiment results on DailyDialog dataset.", "To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result", "The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "d695f2c4cb3f7ed2e009a92223fb19e9ea9d503c" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing." ], "extractive_spans": [], "free_form_answer": "BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "FLOAT SELECTED: Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "83731c877d15704d8e2aff65a59e4a5476bc225c" ], "answer": [ { "evidence": [ "Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are selected and clustered then to expand the knowledge base. DA recognition is challenging due to the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic context.", "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. 
There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16." ], "extractive_spans": [ "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. " ], "free_form_answer": "", "highlighted_evidence": [ "Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance.", "DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "a4120e331463b13f0ba47cc85108f32f8f1c9c7b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?", "What previous methods is the proposed method compared against?", "What is dialogue act recognition?", "Which natural language(s) are studied?" ], "question_id": [ "a3a871ca2417b2ada9df1438d282c45e4b4ad668", "0fcac64544842dd06d14151df8c72fc6de5d695c", "4e841138f307839fd2c212e9f02489e27a5f830c", "37103369e5792ece49a71666489016c4cea94cda" ], "question_writer": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: A snippet of a conversation with the DA labels from Switchboard dataset.", "Figure 1: The model structure for DA recognition, where the LSTM with max pooling is simplified as utterance encoder in our experiment. The area in the red dashed line represents the structure for online prediction.", "Table 2: Time complexity between LSTM and selfattention for both online and offline predictions excluding the time cost of first layer encoding. The parameter n represents for the dialogue length in the sliding window and d represent for the dimension of representation unit.", "Table 3: |C| indicates the number of classes. |U | indicates the average length of dialogues. The train/validation/test columns indicate the number of dialogues (the number of sentences) in the respective splits.", "Table 4: Comparison results with the previous approaches and our approaches on SwDA dataset.", "Table 5: Experiment results about the hyperparameter W and P on SwDA dataset and online prediction result. W,P indicate the size of sliding window and context padding length during training and testing.", "Table 6: Experiment results on DailyDialog dataset.", "Figure 2: Visualization of original attention and local contextual attention. Each colored grid represents the dependency score between two sentences. The deeper the color is, the higher the dependency score is." ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "8-Figure2-1.png" ] }
[ "What previous methods is the proposed method compared against?" ]
[ [ "2003.06044-6-Table4-1.png", "2003.06044-6-Table5-1.png" ] ]
[ "BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT" ]
440
1902.00821
Review Conversational Reading Comprehension
Seeking information about products and services is an important activity of online consumers before making a purchase decision. Inspired by recent research on conversational reading comprehension (CRC) on formal documents, this paper studies the task of leveraging knowledge from a huge amount of reviews to answer multi-turn questions from consumers or users. Questions spanning multiple turns in a dialogue enable users to ask more specific questions that are hard to ask within a single question as in traditional machine reading comprehension (MRC). In this paper, we first build a dataset and then propose a novel task-adaptation approach to encoding the formulation of the CRC task into a pre-trained language model. This task-adaptation approach is unsupervised and can greatly enhance the performance of the end CRC task, which has only limited supervision. Experimental results show that the proposed approach is highly effective and achieves performance competitive with a supervised approach. We plan to release the datasets and the code in May 2019.
{ "paragraphs": [ [ "Seeking information to assess whether some products or services suit one's needs is a vital activity for consumer decision making. In online businesses, one major hindrance is that customers have limited access to answers to their specific questions or concerns about products and user experiences. Given the ever-changing environment of products and services, it is very hard, if not impossible, to pre-compile an up-to-date knowledge base to answer user questions as in KB-QA BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . As a compromise, community question-answering (CQA) BIBREF4 is leveraged to enable existing customers or sellers to answer customer questions. However, one obvious drawback of this approach is that many questions are not answered, and even if they are answered, the answers and the following up questions are delayed, which is not suitable for interactive QA. Although existing studies have used information retrieval (IR) techniques BIBREF4 , BIBREF5 to identify a whole review as an answer to a question, it is time-consuming to read a whole review and the approach has difficulty to answer questions in multiple turns.", "Inspired by recent research in Conversational Reading Comprehension (CRC) BIBREF6 , BIBREF7 , we explore the possibility of turning reviews as a source of valuable knowledge of experiences and to provide a natural way of answering customers' multiple-turn questions in a dialogue setting. The conversational setting of machine reading comprehension (MRC) enables more specific questions and allow customers to either omit or co-reference information in context. As an example in a laptop domain shown in Table 1 , a customer may have 5 turns of questions based on the context. The customer first has an opinion question targeting an aspect “retina display” of a to-be-purchased laptop. Then the customer carries (and omit) the question type opinion from the first question to the second and continually asking the second aspect “boot-up speed”. For the third question, the customer carries the aspect of the second question but change the question type to opinion explanation. Later, the customer can co-reference the aspect “SSD” from the previous answer and ask for the capacity (a sub-aspect) of “SSD”. Unfortunately, there is no answer in this review for the fourth question so the review may say “I don't know”. But the customer can still ask other aspects as in the fifth question. We formally define this problem as follows and call it review conversational reading comprehension (RCRC).", "Problem Definition: Given a review that consists of a sequence of $n$ tokens $d=(d_1, \\dots , d_n)$ , a history of past $k-1$ questions and answers as the context $C=(q_1, a_1, q_2, a_2, \\dots , q_{k-1}, a_{k-1})$ and the current question $q_k$ , find a sequence of tokens (a textual span) $a=(d_s, \\dots , d_e)$ in $d$ that answers $q_k$ based on $C$ , where $1 \\le s \\le n$ , $d=(d_1, \\dots , d_n)$0 , and $d=(d_1, \\dots , d_n)$1 , or return NO ANSWER ( $d=(d_1, \\dots , d_n)$2 ) if the review does not contain any answer for $d=(d_1, \\dots , d_n)$3 .", "RCRC is a novel QA task that requires the understanding of both the current question $q_k$ and dialogue context $C$ . Compared to the traditional single-turn MRC, the key challenge is how to understand the context $C$ and the current question $q_k$ given it may have a co-reference resolution or context carryover.", "To the best of our knowledge, there are no existing review datasets for RCRC. 
We first build a dataset called $(\\text{RC})_2$ based on laptop and restaurant reviews from SemEval 2016 Task 5. We choose this dataset to better align with existing research on review-based tasks in sentiment analysis. Each review is annotated with a few dialogues focusing on some topics. Note that although one dialogue is annotated on a single review, a trained RCRC model can potentially be deployed among an open set of reviews BIBREF8 where the context may potentially contain answers from different reviews. Given the wide spectrum of domains in online business (e.g., thousands of categories on Amazon.com) and the prohibitive cost of annotation, $(\\text{RC})_2$ is designed to have limited supervision as in other tasks of sentiment analysis. We adopt BERT (Bidirectional Encoder Representation from Transformers BIBREF9 ) as our base model since its variants achieve dominant performance on MRC BIBREF10 , BIBREF11 and CRC BIBREF6 tasks. However, BERT is designed to learn features for a wide spectrum of NLP tasks with a large amount of training examples. The task-awareness of BERT can be hindered by the weak supervision of the $(\\text{RC})_2$ dataset. To resolve this challenge, we introduce a novel pre-tuning stage between pre-training and end-task fine-tuning for BERT. The pre-tuning stage is formulated in a similar fashion as the RCRC task but requires no annotated RCRC data and just domain QA pairs (from CQA) and reviews, which are readily available online BIBREF4 . We bring certain characteristics of the RCRC task (inputs/outputs) to pre-tuning to encourage BERT's weight to be prepared for understanding the current question and locate the answer if there exists one. The proposed pre-tuning step is general and can potentially be used in MRC or CRC tasks in other domains.", "The main contributions of this paper are as follows. (1) It proposes a practical new task on reviews that allows multi-turn conversational QA. (2) To address this problem, an annotated dataset is first created. (3) It then proposes a pre-tuning stage to learn task-aware representation. Experimental results show that the proposed approach achieves competitive performance even compared with the supervised approach on a large-scale training data." ], [ "MRC (or CRC) has been studied in many domains with formal written texts, e.g., Wikipedia (WikiReading BIBREF12 , SQuAD BIBREF10 , BIBREF11 , WikiHop BIBREF13 , DRCD BIBREF14 , QuAC BIBREF7 , HotpotQA BIBREF15 ), fictional stories (MCTest BIBREF16 , CBT BIBREF17 , NarrativeQA BIBREF18 ), general Web documents (MS MARCO BIBREF19 , TriviaQA BIBREF20 , SearchQA BIBREF21 ) and news articles (NewsQA BIBREF22 , CNN/Daily Mail BIBREF23 , and RACE BIBREF24 ). Recently, CRC BIBREF6 , BIBREF25 , BIBREF26 gains increasing popularity as it allows natural multi-turn questions. Examples are QuAC BIBREF7 and CoQA BIBREF6 . CoQA is built from multiple sources, such as Wikipedia, Reddit, News, Mid/High School Exams, Literature, etc. To the best of our knowledge, CRC has not been used on reviews, which are primarily subjective. Our $(\\text{RC})_2$ dataset is compatible with the format of CoQA datasets so all CoQA-based models can be easily adapted to our dataset. 
Answers from $(\\text{RC})_2$ are intended to be extractive (similar to SQuAD BIBREF10 , BIBREF11 ) rather than abstractive (generative) (such as in MS MARCO BIBREF19 and CoQA BIBREF6 ) because we believe online businesses are cost-sensitive so relying on human written answers are more reliable than machine generated answers.", "Traditionally, knowledge bases (KBs) (such as Freebase BIBREF27 , BIBREF3 , BIBREF28 or DBpedia BIBREF29 , BIBREF30 ) have been used for question-answering BIBREF5 . However, the ever-changing environment of online businesses launches new products and services appear constantly, making it prohibitive to build a high-quality KB to cover all new products, services and subjective experiences from customers. Community QA (CQA) is widely adopted by online businesses BIBREF4 to help users get answers for their questions. However, since the answers are written by humans, it often takes a long time to get a question answered or even not answered at all as we discussed in the introduction section. There exist researches that align reviews to questions in CQA as an information retrieval task BIBREF4 , BIBREF5 , but a whole review is hard to read and not suitable for follow-up questions. We novelly use CQA data for CRC (or potentially for MRC), which play a significant role in encouraging domain representation learning on questions and contexts, which are largely ignored in existing research on MRC (or CRC)." ], [ "In this section, we briefly review BERT (Bidirectional Encoder Representation from Transformers BIBREF9 ), which is one of the key innovations of unsupervised contextualized representation learning BIBREF31 , BIBREF32 , BIBREF33 , BIBREF9 . The idea behind these innovations is that although the word embedding BIBREF34 , BIBREF35 layer is trained from large-scale corpora, relying on the limited supervised data from end-tasks to train the contextualized representation is insufficient. Unlike ELMo BIBREF31 and ULMFiT BIBREF32 that are designed to provide additional features for an end task, BERT adopts a fine-tuning approach that requires almost no specific architecture design for end tasks, but parameter intensive models on BERT itself. As such, BERT requires pre-training on large-scale data (Wikipedia articles) to fill intensive parameters in exchange for human structured architecture designs for specific end-tasks that carry human's understanding of data of those tasks.", "One training example of BERT is formulated as $(\\texttt {[CLS]}, x_{1:j}, \\texttt {[SEP]}, x_{j+1:n}, \\texttt {[SEP]})$ , where [CLS] and [SEP] are special tokens and $x_{1:n}$ is a document splited into two sides of sentences $x_{1:j}$ and $x_{j+1:n}$ . The key performance gain of BERT comes from two novel pre-training objectives: masked language model (MLM) and next text sentence prediction.", "Masked Language Model enables learning bidirectional language models and essentially encourages a BERT model to predict randomly masked words given their contexts. This is crucial for RCRC. For example, an example can be “its amazingly [MASK] when it boots up because of the [MASK] storage”. These two [MASK]'s encourage BERT to guess that the first mark could be “fast”and the second mask could be“SSD” so as to learn some common knowledge on aspects of laptops and their potential opinions.", "Next Sentence Prediction further encourages BERT to learn inter-sentence representations by predicting whether two sides around the first $\\texttt {[SEP]}$ are from the same document or not. 
We remove this objective in our pre-tuning as the text format is different from BERT pre-training (discussed in the next Section).", "In summary, we can see that the pre-trained BERT severely lacks RCRC task-awareness as there is no formulation for either context $C$ , the current turn question $q_{k}$ or possible answer spans as Wikipedia contains almost no questions or domain knowledge about online businesses. We resolve these issues in the next section." ], [ "To address the limitation of BERT on task-awareness, we introduce an intermediate stage of pre-tuning between BERT pre-training and fine-tuning on RCRC. This works in a similar spirit to the invention of BERT (or any other pre-trained language models) because it is also insufficient to learn the end task definition (or setting) solely on the limited supervised data (of that task). The task-awareness is determined by the inputs and outputs of RCRC, which introduce two directions for pre-tuning: (1) understanding the text inputs, including both domains and text formats (e.g., contexts, current questions). (2) understanding the goal of RCRC, including both having a text span or no answer. As such, we first define the textual format that is shared by both the RCRC and BERT pre-tuning in Section \"Conclusions\" . Then we introduce an auxiliary pre-tuning objective in Section \"Auxilary Objective\" ." ], [ "Inspired by the recent implementation of DrQA for CoQA BIBREF6 and BERT for SQuAD, we formuate an input example $x$ for pre-tuning (or RCRC) from the context $C$ , the current question $q_k$ , and the review $d$ as follows: ", "$$\\begin{split}\nx=(\\texttt {[CLS]} \\texttt {[Q]} q_1 \\texttt {[A]} a_1 \\dots \\texttt {[Q]} q_{k-1} \\texttt {[A]} a_{k-1} \\\\\n\\texttt {[Q]} q_{k} \\texttt {[SEP]} d_{1:n} \\texttt {[SEP]}),\n\\nonumber \\end{split}$$ (Eq. 7) ", "where past QA pairs $q_1, a_1, \\dots , q_{k-1}, a_{k-1}$ in $C$ are concatenated and separated by two special tokens [Q] and [A] and then concatenate with the current question $q_k$ as the left side of BERT and the right side is the review document. This format will be used for both pre-tuning and RCRC task fine-tuning. Note that the answer for a question with no answer in the context is written as a single word “unknown”. One can observe that although this format is simple and intuitive for humans to read, BERT's pre-trained weights have no idea the semantics behind this format (e.g., where is the current question, how many turns in the context and where is the previous turn), let alone the special tokens [Q] and [A] never appear during BERT pre-training." ], [ "Based on the format defined in Section \"Conclusions\" , we can observe that getting BERT to be familiar with domain reviews is as easy as continually training BERT on reviews. However, enabling BERT to understand the context $C$ and the current question $q_k$ is more challenging as the pre-training data of BERT has almost no question. To resolve this issue, we combine QA pairs (from CQA data) and reviews to formulate the pre-tuning examples, as shown in Algorithm \"Pre-tuning Data Generation\" . 
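A minimal Python sketch of the textual input format in Eq. (7) above, with the special tokens written as literal strings and the turn history supplied as (question, answer) pairs; the helper name and the example turns (echoing the laptop dialogue in Table 1) are illustrative assumptions of this sketch.

```python
def format_rcrc_input(context, current_question, review_tokens):
    """Build the input of Eq. (7): [CLS] [Q] q1 [A] a1 ... [Q] q_k [SEP] review [SEP].
    `context` is a list of (question, answer) pairs from previous turns;
    a turn with no answer in the review is written as the single word 'unknown'."""
    left = ["[CLS]"]
    for q, a in context:
        left += ["[Q]", q, "[A]", a if a is not None else "unknown"]
    left += ["[Q]", current_question, "[SEP]"]
    return " ".join(left + list(review_tokens) + ["[SEP]"])

x = format_rcrc_input(
    context=[("does it have a retina display ?", "yes , and it is gorgeous"),
             ("how about the boot-up speed ?", None)],
    current_question="why is it so fast ?",
    review_tokens="its amazingly fast when it boots up because of the SSD storage".split(),
)
print(x)
```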
Note that these two kinds of data are often readily available across a wide range of products in Amazon.com and Yelp.com.", "[t] Data Generation Algorithm InputInput OutputOutput $\\mathcal {T}$ : pre-tuning data.", " $\\mathcal {T} \\leftarrow \\lbrace \\rbrace $ $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$ $x \\leftarrow \\texttt {[CLS]} $ $h \\leftarrow \\text{RandInteger}([0, h_{\\text{max}}]) $ $1 \\rightarrow h$ $q^{\\prime \\prime }, a^{\\prime \\prime } \\leftarrow \\text{RandSelect}(\\mathcal {Q}\\backslash (q^{\\prime }, a^{\\prime }))$ $ x \\leftarrow x \\oplus \\texttt {[Q]} \\oplus q^{\\prime \\prime } \\oplus \\texttt {[A]} \\oplus a^{\\prime \\prime } $ $ x \\leftarrow x \\oplus \\texttt {[Q]} \\oplus q^{\\prime } \\texttt {[SEP]} $ $ r_{1:s} \\leftarrow \\text{RandSelect}(\\mathcal {R}) $ $\\text{RandFloat}([0.0, 1.0]) > 0.5$ $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$0 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$1 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$2 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$3 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$4 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$5 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$6 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$7 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$8 $(q^{\\prime }, a^{\\prime }) \\in \\mathcal {Q}$9 ", "To ensure the topic of a pre-tuning example is consistent between QAs and reviews, we assume QA pairs and reviews are organized under each entity (a laptop or a restaurant in our experiment) that customers focus on. The inputs to Algorithm \"Pre-tuning Data Generation\" are a set of QA pairs and reviews belonging to the same entity and the maximum turns in the context that is the same as the RCRC datasets. The output is the pre-tuning data as initialized in Line 1, where each example is denoted as $(x, (s, e))$ . Here $x$ is the input example and $(s, e)$ is the two pointers for the auxiliary objective (discussed in Section \"Auxilary Objective\" ). Given a QA pair $(q^{\\prime }, a^{\\prime })$ in Line 2, we first build the left side of input example $x$ in Line 3-9. After initializing input $x$ in Line 3, we randomly determine the number of turns as context in Line 4 and concatenate $\\oplus $ these turns of QA pairs in Line 5-8, where $\\mathcal {Q}\\backslash (q^{\\prime }, a^{\\prime })$ ensures the current QA pair $(q^{\\prime }, a^{\\prime })$ is not chosen. In Line 9, we concatenate the current question $q^{\\prime }$ . Lines 10-23 build the right side of input example $x$0 and the outputs pointers In Line 10, we randomly draw a review with $x$1 sentences. To challenge the pre-tuning stage to discover the semantic relatedness between $x$2 and $x$3 (as for the auxiliary objective), we first decide whether to allow the right side of $x$4 contains $x$5 (Line 16) for $x$6 or a fake random answer $x$7 Lines 11-12. We also come up with two pointers $x$8 and $x$9 initialized in Lines 13 and 17. Then, we insert $(s, e)$0 into review $(s, e)$1 by randomly pick one from the $(s, e)$2 locations in Lines 19-20. This gives us $(s, e)$3 , which has $(s, e)$4 tokens. We further update $(s, e)$5 and $(s, e)$6 to allow them to point to the chunk boundaries of $(s, e)$7 . Otherwise, BERT should detect as no $(s, e)$8 on the right side and point to [CLS] ( $(s, e)$9 ). Finally, examples are aggregated in Line 25.", "Algorithm \"Pre-tuning Data Generation\" is run $p=10$ times to allow for enough samplings of data. 
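The following is a simplified sketch of the data-generation procedure described above. It assumes the hypothetical build_rcrc_input helper from the earlier sketch, samples a few context turns, inserts either the true answer or a random negative answer into a review, and records start/end pointers at sentence granularity rather than token-level chunk boundaries; it is an illustration of the idea, not the authors' algorithm.

```python
import random

def generate_pretuning_examples(qa_pairs, reviews, max_turns=2, p=10, seed=0):
    """Simplified sampler for pre-tuning examples.

    qa_pairs: list of (question, answer) pairs for one entity (e.g., a laptop).
    reviews:  list of reviews for the same entity, each a list of sentences.
    Returns (input_text, (start, end)) pairs, where the pointers index the
    inserted answer sentence and (0, 0) marks [CLS], i.e., no answer.
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(p):                            # the procedure is run p times
        for q, a in qa_pairs:
            others = [pair for pair in qa_pairs if pair != (q, a)]
            n_turns = min(rng.randint(0, max_turns), len(others))
            context = rng.sample(others, k=n_turns)
            sentences = list(rng.choice(reviews))
            # Positive example: hide the true answer; negative: hide another answer.
            positive = rng.random() > 0.5 or not others
            answer = a if positive else rng.choice(others)[1]
            pos = rng.randint(0, len(sentences))
            sentences.insert(pos, answer)
            target = (pos + 1, pos + 1) if positive else (0, 0)
            text = build_rcrc_input(context, q, " ".join(sentences))
            examples.append((text, target))
    return examples
```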
As we can see, although labeled training examples for RCRC are expensive to obtain, harvesting a large amount of pre-tuning data is easy. Following the success of BERT, we still randomly mask some words in each example $x$ to learn contextualized representations on domain texts." ], [ "Besides adapting the input $x$ to domains and the RCRC task, it is also desirable to allow pre-tuning to adapt BERT to the goal of RCRC tasks, which is to predict a token span or NO ANSWER. Besides MLM from BERT, we further introduce an auxiliary objective called answer chunk detection to align BERT with a setting similar to RCRC, except that we only predict the token spans of an answer chunk from CQA. Further, this task prepares BERT for predicting NO ANSWER from a review by detecting a randomly drawn negative answer.", "Let $\\text{BERT}(\\cdot )$ be BERT's transformer model. We first obtain the hidden representation of BERT as $h=\\text{BERT}(x)$ . Then the hidden representation is passed to two separate dense layers followed by softmax functions: $l_1=\\text{softmax}(W_1 \\cdot h + b_1)$ and $l_2=\\text{softmax}(W_2 \\cdot h + b_2)$ , where $W_1$ , $W_2 \\in \\mathbb {R}^{r_h}$ and $b_1, b_2 \\in \\mathbb {R}$ and $r_h$ is the size of the hidden dimension (e.g., 768 for $\\text{BERT}_{\\text{BASE}}$ ). Training involves minimizing the averaged cross entropy on the two pointers $s$ and $e$ generated in Algorithm \"Pre-tuning Data Generation\" : $\\mathcal {L}=-\\frac{1}{2}\\big (\\mathbb {I}(s)^{\\top }\\log (l_1)+\\mathbb {I}(e)^{\\top }\\log (l_2)\\big )$ ", "where $\\mathbb {I}(s)$ and $\\mathbb {I}(e)$ are one-hot vectors representing the two starting and ending positions. For a positive example (with true answer $a^{\\prime }$ randomly inserted in the review), $s$ and $e$ are expected to satisfy $\\text{Idx}_{\\texttt {[SEP]}} < s\\le |x|$ and $s\\le e\\le |x|$ , respectively, where $\\text{Idx}_{\\texttt {[SEP]}}$ is the position of the first [SEP]. For a negative example (with a random answer (not $a^{\\prime }$ ) mixed into the review), $s,e=1$ indicates the two pointers must point to [CLS].", "After pre-tuning, we fine-tune on the RCRC task in a similar fashion to the auxiliary objective, except this time there is no need to perform MLM." ], [ "We aim to answer the following research questions (RQs) in the experiments:", "RQ1: What is the performance of using BERT compared against the CoQA baselines?", "RQ2: In ablation studies of different applications of BERT, what is the performance gain of pre-tuning?", "RQ3: What is the performance of pre-tuning compared to using (large-scale) supervised data?" ], [ "To be consistent with existing research on review-based tasks such as sentiment analysis, we adopt SemEval 2016 Task 5 as the review source for RCRC, which contains two domains: laptop and restaurant. Then we collect reviews and QA pairs for these two domains. For the laptop domain, we collect the reviews from BIBREF36 and QA pairs from BIBREF37 , both under the laptop category of Amazon.com. We exclude products in the test data of $(\\text{RC})_2$ to make sure the test data is not used to train any model parameters. This gives us 113,728 laptop reviews and 18,589 QA pairs. For the restaurant domain, we collect reviews from the Yelp dataset challenge but crawl QA pairs from Yelp.com . We select restaurants with at least 100 reviews as other restaurants seldom have any QA pairs. This results in 753,096 restaurant reviews and 15,457 QA pairs.", "To compare with a supervised pre-tuning approach, we further leverage the CoQA dataset BIBREF6 .
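Returning to the answer chunk detection objective above, the following PyTorch-style sketch shows the two pointer heads and the averaged cross-entropy loss. It is an illustration of the stated equations rather than the authors' implementation; the BERT encoder is assumed to be provided externally, and position 0 stands for [CLS] (no answer) in 0-indexed code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnswerChunkHead(nn.Module):
    """Two dense layers over BERT hidden states giving start/end pointer logits."""

    def __init__(self, hidden_size=768):
        super().__init__()
        self.start = nn.Linear(hidden_size, 1)   # plays the role of (W1, b1)
        self.end = nn.Linear(hidden_size, 1)     # plays the role of (W2, b2)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size), e.g. the output of BERT(x).
        l1 = self.start(hidden_states).squeeze(-1)   # start logits, (batch, seq_len)
        l2 = self.end(hidden_states).squeeze(-1)     # end logits, (batch, seq_len)
        return l1, l2

def answer_chunk_loss(l1, l2, start_positions, end_positions):
    # Averaged cross entropy over the two pointers; cross_entropy applies the
    # softmax internally. Position 0 ([CLS]) encodes NO ANSWER.
    return 0.5 * (F.cross_entropy(l1, start_positions) +
                  F.cross_entropy(l2, end_positions))

# Toy usage with random tensors standing in for BERT hidden states.
hidden = torch.randn(2, 16, 768)
l1, l2 = AnswerChunkHead()(hidden)
loss = answer_chunk_loss(l1, l2, torch.tensor([5, 0]), torch.tensor([7, 0]))
```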
It comes with 7,199 documents (passages) and 108,647 QA pairs of supervised training data with domains in Children’s Story, Literature, Mid/High School, News, and Wikipedia." ], [ "To the best of our knowledge, there are no existing datasets for RCRC. We keep the split of training and testing of the SemEval 2016 Task 5 datasets and annotate dialogues of QAs on each review. To ensure our questions are real-world questions, annotators are first asked to read CQAs of the pre-tuning data. Each dialogue is annotated to focus on certain topics of a review. The textual spans are kept as short as possible while still being human-readable. No-answer questions are also annotated, which have certain topical connections with the nearby questions or answers. Annotators are encouraged to label about 2 dialogues per testing review to obtain enough testing examples, and 1 dialogue per training review to obtain good coverage of reviews. Each question is shortened as much as possible to omit existing information in the past turns. The annotated data is in the format of CoQA BIBREF6 to help future research. The statistics of the $(\\text{RC})_2$ dataset are shown in Table 2 . We split 20% of the training reviews as the validation set for each domain." ], [ "We compare the following methods by training/fine-tuning on $(\\text{RC})_2$ . All the baselines are run using their default hyper-parameters.", "DrQA is the CRC baseline that comes with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is modified to support no-answer questions by appending a special token unknown at the end of the document. So having a span with unknown indicates NO ANSWER. This baseline answers the research question RQ1.", "DrQA+CoQA is the above baseline pre-tuned on the CoQA dataset and then fine-tuned on $(\\text{RC})_2$ . We use this baseline to show that even DrQA pre-trained on CoQA is sub-optimal for RCRC. This baseline is used to answer RQ1 and RQ3.", "BERT is the vanilla BERT model directly fine-tuned on $(\\text{RC})_2$ . We use this baseline for an ablation study on the effectiveness of pre-tuning. All of these BERT variants are used to answer RQ2.", "BERT+review first tunes BERT on domain reviews using the same objectives as BERT pre-training and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that a simple domain-adaptation of BERT is not effective.", "BERT+CoQA first fine-tunes BERT on the supervised CoQA data and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that pre-tuning is very competitive even compared with models trained on large-scale supervised data. This also answers RQ3.", "BERT+Pre-tuning first pre-tunes BERT as proposed and then fine-tunes on $(\\text{RC})_2$ ." ], [ "We choose the BERT base model as our pre-tuning and fine-tuning model, which has 12 layers, 768 hidden dimensions and 12 attention heads (in the transformer), for a total of 110M parameters. We cannot use the BERT large model as we cannot fit it into our GPU memory for training. We set the maximum length to be 256 with a batch size of 16. We perform pre-tuning for 10k steps as further increasing the pre-tuning steps doesn't yield better results. We fine-tune for 6 epochs, though most runs converged within just 3 epochs due to the pre-trained/tuned weights of BERT.
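For reference, the hyper-parameters reported above can be collected into a small configuration object such as the following; the field names are our own and not tied to any particular library.

```python
from dataclasses import dataclass

@dataclass
class RcrcSettings:
    # Settings reported above; the field names are illustrative only.
    base_model: str = "bert-base"   # 12 layers, 768 hidden, 12 heads, ~110M params
    max_seq_length: int = 256
    batch_size: int = 16
    pretuning_steps: int = 10_000
    finetune_epochs: int = 6
    num_runs: int = 3               # fine-tuning results averaged over 3 seeds

settings = RcrcSettings()
print(settings)
```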
Results are reported as averages of 3 runs of fine-tuning (3 different random seeds for tuning batch generation).", "To be consistent with existing research, we leverage the same evaluation script from CoQA. Similar to the evaluation of SQuAD 2.0, the CoQA script reports turn-level Exact Match (EM) and F1 scores for all turns in all dialogues. EM requires the answers to exactly match the human-annotated answer spans. The F1 score is the average of the F1 scores of individual answers, which is typically higher than EM and is the major metric." ], [ "As shown in Table 3 , BERT+Pre-tuning has significant performance gains over many baselines. To answer RQ1, we can see that BERT is better than the DrQA baseline from CoQA. To answer RQ2, we notice that by leveraging BERT+Pre-tuning, we have about 9% performance gain. Note that directly using review documents to continually pre-train BERT does not yield better results for BERT+review. We suspect the RCRC task still requires a certain degree of general language understanding, and BERT+review has the effect of (catastrophically) forgetting BIBREF38 the strengths of BERT. To answer RQ3, we notice that large-scale supervised CoQA data can boost the performance for both DrQA and BERT. However, our pre-tuning stage still has competitive performance and it requires no annotation at all." ], [ "In this paper, we propose a novel task called review conversational reading comprehension (RCRC). We investigate the possibility of interactive question answering by using reviews as a source of knowledge about user experiences. We first build a dataset called $(\\text{RC})_2$ , which is derived from popular review datasets for sentiment analysis. To resolve the issues of limited supervision introduced by the prohibitive cost of annotation, we introduce a novel pre-tuning stage to perform task-adaptation from a language model. This pre-tuning stage can potentially be used for any MRC or CRC task, given that it requires no annotation, only large QA and review corpora that are available online. Experimental results show that the pre-tuning approach is highly effective: it outperforms existing baselines and is highly competitive with supervised baselines trained on a large-scale dataset." ] ], "section_name": [ "Introduction", "Related Works", "Preliminary", "Task-awareness Pre-tuning", "Textual Format", "Pre-tuning Data Generation", "Auxilary Objective", "Experiments", "Pre-tuning datasets", "(RC) 2 (\\text{RC})_2 Datasets", "Compared Methods", "Hyper-parameters and Evaluation Metrics", "Result Analysis", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "33f70e75e30422a4c73dd1e8e8d1609c5038174e" ], "answer": [ { "evidence": [ "DrQA is a CRC baseline coming with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is modified to support answering no answer questions by having a special token unknown at the end of the document. So having a span with unknown indicates NO ANSWER. This baseline answers the research question RQ1.", "DrQA+CoQA is the above baseline pre-tuned on CoQA dataset and then fine-tuned on $(\\text{RC})_2$ . We use this baseline to show that even DrQA pre-trained on CoQA is sub-optimal for RCRC. This baseline is used to answer RQ1 and RQ3.", "BERT is the vanilla BERT model directly fine-tuned on $(\\text{RC})_2$ . We use this baseline for ablation study on the effectiveness of pre-tuning. All these BERT's variants are used to answer RQ2.", "BERT+review first tunes BERT on domain reviews using the same objectives as BERT pre-training and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that a simple domain-adaptation of BERT is not good.", "BERT+CoQA first fine-tunes BERT on the supervised CoQA data and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that pre-tuning is very competitive even compared with models trained from large-scale supervised data. This also answers RQ3." ], "extractive_spans": [], "free_form_answer": "The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data", "highlighted_evidence": [ "DrQA is a CRC baseline coming with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is modified to support answering no answer questions by having a special token unknown at the end of the document. ", "DrQA+CoQA is the above baseline pre-tuned on CoQA dataset and then fine-tuned on $(\\text{RC})_2$ . ", "BERT is the vanilla BERT model directly fine-tuned on $(\\text{RC})_2$ . We use this baseline for ablation study on the effectiveness of pre-tuning.", "BERT+review first tunes BERT on domain reviews using the same objectives as BERT pre-training and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that a simple domain-adaptation of BERT is not good.", "BERT+CoQA first fine-tunes BERT on the supervised CoQA data and then fine-tunes on $(\\text{RC})_2$ . We use this baseline to show that pre-tuning is very competitive even compared with models trained from large-scale supervised data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "infinity" ], "paper_read": [ "no" ], "question": [ "What is the baseline model used?" ], "question_id": [ "3213529b6405339dfd0c1d2a0f15719cdff0fa93" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "reading comprehension" ], "topic_background": [ "research" ] }
{ "caption": [ "Table 1: An Example of conversational review reading comprehension (best viewed in colors): we show dialogue with 5-turn questions that a customer asks and the review with textual span answers.", "Table 2: Statistics of (RC)2 Datasets.", "Table 3: Results of RCRC on EM (Exact Match) and F1." ], "file": [ "1-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png" ] }
[ "What is the baseline model used?" ]
[ [ "1902.00821-Compared Methods-2", "1902.00821-Compared Methods-1", "1902.00821-Compared Methods-3", "1902.00821-Compared Methods-4", "1902.00821-Compared Methods-5" ] ]
[ "The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data" ]
442
2002.01359
Schema-Guided Dialogue State Tracking Task at DSTC8
This paper gives an overview of the Schema-Guided Dialogue State Tracking task of the 8th Dialogue System Technology Challenge. The goal of this task is to develop dialogue state tracking models suitable for large-scale virtual assistants, with a focus on data-efficient joint modeling across domains and zero-shot generalization to new APIs. This task provided a new dataset consisting of over 16000 dialogues in the training set spanning 16 domains to highlight these challenges, and a baseline model capable of zero-shot generalization to new APIs. Twenty-five teams participated, developing a range of neural network models, exceeding the performance of the baseline model by a very high margin. The submissions incorporated a variety of pre-trained encoders and data augmentation techniques. This paper describes the task definition, dataset and evaluation methodology. We also summarize the approach and results of the submitted systems to highlight the overall trends in the state-of-the-art.
{ "paragraphs": [ [ "Virtual assistants help users accomplish tasks, including but not limited to finding flights and booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana, etc., need to support a large and constantly increasing number of services, over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3.", "However, building large-scale assistants, as opposed to dialogue systems managing a few APIs, poses a new set of challenges. Apart from handling a very large variety of domains, such systems need to support heterogeneous services or APIs with possibly overlapping functionality. They should also offer an efficient way of supporting new APIs or services, while requiring little or no additional training data. Furthermore, to reduce maintenance workload and accommodate future growth, such assistants need to be robust to changes in the API's interface or the addition of new slot values. Such changes shouldn't require collection of additional training data or retraining the model.", "The Schema-Guided Dialogue State Tracking task at the Eighth Dialogue System Technology Challenge explores the aforementioned challenges in the context of dialogue state tracking. In a task-oriented dialogue, the dialogue state is a summary of the entire conversation up to the current turn. The dialogue state is used to invoke APIs with appropriate parameters as specified by the user over the dialogue history. It is also used by the assistant to generate the next actions to continue the dialogue. DST, therefore, is a core component of virtual assistants.", "In this task, participants are required to develop innovative approaches to multi-domain dialogue state tracking, with a focus on data-efficient joint modeling across APIs and zero-shot generalization to new APIs. The task is based on the Schema-Guided Dialogue (SGD) dataset, which, to the best of our knowledge, is the largest publicly available corpus of annotated task-oriented dialogues. With over 16000 dialogues in the training set spanning 26 APIs over 16 domains, it exceeds the existing dialogue corpora in scale. SGD is the first dataset to allow multiple APIs with overlapping functionality within each domain. To adequately test generalization in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants." ], [ "Dialogue systems have constituted an active area of research for the past few decades. The advent of commercial personal assistants has provided further impetus to dialogue systems research.
As virtual assistants incorporate diverse domains, zero-shot modeling BIBREF4, BIBREF5, BIBREF6, domain adaptation and transfer learning techniques BIBREF7, BIBREF8, BIBREF9 have been explored to support new domains in a data efficient manner.", "Deep learning based approaches to DST have recently gained popularity. Some of these approaches estimate the dialogue state as a distribution over all possible slot-values BIBREF10, BIBREF11 or individually score all slot-value combinations BIBREF12, BIBREF13. Such approaches are, however, hard to scale to real-world virtual assistants, where the set of possible values for certain slots may be very large (date, time or restaurant name) and even dynamic (movie or event name). Other approaches utilizing a dynamic vocabulary of slot values BIBREF14, BIBREF15 still do not allow zero-shot generalization to new services and APIs BIBREF16, since they use schema elements i.e. intents and slots as fixed class labels.", "Although such systems are capable of parsing the dialogue semantics in terms of these fixed intent labels, they lack understanding of the semantics of these labels. For instance, for the user utterance “I want to buy tickets for a movie.\", such models can predict BuyMovieTickets as the correct intent based on patterns observed in the training data, but don't model either its association with the real world action of buying movie tickets, or its similarity to the action of buying concert or theatre tickets. Furthermore, because of their dependence on a fixed schema, such models are not robust to changes in the schema, and need to be retrained as new slots or intents are added. Use of domain-specific parameters renders some approaches unsuitable for zero-shot application." ], [ "The primary task of this challenge is to develop multi-domain models for DST suitable for the scale and complexity of large scale virtual assistants. Supporting a wide variety of APIs or services with possibly overlapping functionality is an important requirement of such assistants. A common approach to do this involves defining a large master schema that lists all intents and slots supported by the assistant. Each service either adopts this master schema for the representation of the underlying data, or provides logic to translate between its own schema and the master schema.", "The first approach involving adoption of the master schema is not ideal if a service wishes to integrate with multiple assistants, since each of the assistants could have their own master schema. The second approach involves definition of logic for translation between master schema and the service's schema, which increases the maintenance workload. Furthermore, it is difficult to develop a master schema catering to all possible use cases.", "Additionally, while there are many similar concepts across services that can be jointly modeled, for example, the similarities in logic for querying or specifying the number of movie tickets, flight tickets or concert tickets, the master schema approach does not facilitate joint modeling of such concepts, unless an explicit mapping between them is manually defined. To address these limitations, we propose a schema-guided approach, which eliminates the need for a master schema." ], [ "Under the Schema-Guided approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF2 shows an example). 
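To make the schema-guided setup tangible, the following is a hypothetical service schema in the spirit of the example referenced above (Figure FIGREF2). The service, slot, and intent names here are invented for illustration and are not taken from the actual dataset.

```python
# A hypothetical service schema; names and descriptions are illustrative only.
example_schema = {
    "service_name": "FlightBooker",
    "description": "Search for and book flights between cities.",
    "slots": [
        {"name": "origin", "description": "City the flight departs from",
         "is_categorical": False, "possible_values": []},
        {"name": "seating_class", "description": "Cabin class of the ticket",
         "is_categorical": True, "possible_values": ["economy", "business"]},
    ],
    "intents": [
        {"name": "SearchFlights",
         "description": "Find flights matching the user's constraints",
         "required_slots": ["origin"],
         "optional_slots": {"seating_class": "economy"}},
    ],
}
```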
The dialogue annotations are guided by the schema of the underlying service or API, as shown in Figure FIGREF3. In this example, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.", "The natural language descriptions present in the schema are used to obtain a semantic representation of intents and slots. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. Using a single model facilitates representation and transfer of common knowledge across related concepts in different services. Since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. It is also robust to changes like the addition of new intents or slots to the service. In addition, the participants are allowed to use any external datasets or resources to bootstrap their models." ], [ "As shown in Table TABREF9, our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services." ], [ "The dataset consists of conversations between a virtual assistant and a user. Each conversation can span multiple services across various domains. The dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slots values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.", "In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema." ], [ "To reflect the constraints present in real-world services and APIs, we impose a few constraints on the data. Our dataset does not expose the set of all possible values for certain slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Such slots are specifically identified as non-categorical slots. In our evaluation sets, we ensured the presence of a significant number of values which were not previously seen in the training set to evaluate the performance of models on unseen values. Some slots like gender, number of people, etc. 
are classified as categorical and we provide a list of all possible values for them. However, these values are assumed not to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for the gender slot.", "Real-world services can only be invoked with certain slot combinations: e.g. most restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. Although this constraint has no implications for the dialogue state tracking task, it restricts the possible conversational flows. Hence, to prevent flows not supported by actual services, we restrict services to be called with a list of slot combinations. The different service calls supported by a service are listed as intents, with each intent specifying a list of required slots. The intent cannot be called without providing values for these required slots. Each intent also contains a list of optional slots with default values which can be overridden by the user.", "In our dataset, we also have multiple services per domain with overlapping functionality. The intents across these services are similar but differ in terms of intent names, intent arguments, slot names, etc. In some cases, there is no one-to-one mapping between slot names (e.g., the num_stops and direct_only slots in Figure FIGREF3). With an ever-increasing number of services and service providers, we believe that having multiple similar services per domain is much closer to the situation faced by virtual assistants than having one unique service per domain." ], [ "Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much more diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Third, using a simulator greatly simplifies the data collection task and instructions, as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains.", "The 20 domains present across the train, dev and test datasets are listed in Table TABREF10, as are the details regarding which domains are present in each of the datasets. We create synthetic implementations of a total of 45 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are structured representations of dialogue semantics. We then use a crowd-sourcing procedure to paraphrase these outlines into natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps briefly and then present analyses of the collected dataset.", "All the services are implemented using a SQL engine. Since entity attributes are often correlated, we decided not to sample synthetic entities and instead relied on sampling entities from Freebase. The dialogue simulator interacts with the services to generate valid dialogue outlines.
The simulator consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. At the start of the conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. The user agent generates dialogue acts to be output and combines them with values retrieved from the service/API to create the user actions. The system agent responds by following a similar procedure but also ensures that the generated flows are valid. We identified over 200 distinct scenarios for the training set, each consisting of up to 5 intents from various domains. Finally, the dialogue outlines generated are paraphrased into a natural conversation by crowd workers. We ensure that the annotations for the dialogue state and slots generated by the simulator are preserved and hence require no further annotation. We omit details for brevity: please refer to BIBREF19 for more details.", "The entire dataset consists of over 16K dialogues spanning multiple domains. Overall statistics of the dataset and a comparison with other datasets can be seen in Table TABREF9. Figure FIGREF8 shows the details of the distribution of dialogue lengths across single-domain and multi-domain dialogues. The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. Figure FIGREF8 shows the frequency of the different dialogue acts contained in the dataset. The dataset also contains a significant number of unseen domains/APIs in the dev and test sets. 77% of the dialogue turns in the test set and 45% of the turns in the dev set contain at least one service not present in the training set. This facilitates the development of models which can generalize to new domains with very few labelled examples." ], [ "The submissions from 25 teams included a variety of approaches and innovative solutions to specific problems posed by this dataset. For the workshop, we received submissions from 9 of these teams. In this section, we provide a short summary of the approaches followed by these teams. For effective generalization to unseen APIs, most teams used pre-trained encoders to encode schema element descriptions. Unless otherwise mentioned, a pre-trained BERT BIBREF20 encoder was used.", "Team 2 BIBREF21: This was the only paper not using a pre-trained encoder, thus providing another important baseline. They rely on separate RNNs to encode service, slot and intent descriptions, and a BiRNN to encode dialogue history. Slot values are inferred using a TRADE-like encoder-decoder setup with a 3-way slot status gate, using the utterance encoding and schema element embeddings as context.", "Team 5 BIBREF22: They predict values for categorical slots using a softmax over all candidate values. Non-categorical slot values are predicted by first predicting the status of each slot and then using a BiLSTM-CRF layer for BIO tagging BIBREF23. They also utilize a slot adoption tracker to predict if the values proposed by the system are accepted by the user.", "Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input.
They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy.", "Team 12 BIBREF25: They use auxiliary binary features to connect previous intent to current intent, slots to dialogue history and source slots to target slots for slot transfer. Non-categorical slots are modeled similar to question answering by adding a null token and predicting spans for slot values. In-domain and cross-domain slot transfers are modeled as separate binary decisions by passing the slot descriptions as additional inputs.", "Team 16 BIBREF26: They convert the tracking task for both categorical and non-categorical slots into a question answering task by feeding in the schema and the previous turns as the context. Similar to the baseline model, prediction is performed in two stages. The status of each slot (active/inactive/dontcare) is predicted using a classifier, following which the value is predicted as a span in the context. The same network is used for the different prediction tasks but the leading token and separator tokens used are different. They observe large gains by fine-tuning the schema embeddings and increasing the number of past turns fed as context.", "Team 23 BIBREF27: They use a large scale multi-task model utilizing a single pass of a BERT based model for all tasks. Embeddings are calculated for the intents and slot value by using dialogue history, service and slot descriptions, possible values for categorical slots and are used for the various predictions.", "Anonymous Team A BIBREF28: We could not identify which team submitted this model. They use multi-head attention twice to obtain domain-conditioned and slot-conditioned representations of the dialogue history. These representations are concatenated to obtain the full context which is used for the various predictions.", "Anonymous Team B BIBREF29: We could not identify which team submitted this model. They use separate NLU systems for the sub tasks of predicting intents, requested slots, slot status, categorical and non-categorical slot values. They use a rule-based DST system with a few additions resulting in significant improvement. The improvements include adding dropout to intent prediction to account for train-test mismatch, using the entire predicted slot status distribution and separate binary predictions for slot transfer.", "Anonymous Team C BIBREF30: They use a two-stage model with a candidate tracker for NLU and a candidate classifier to update the dialogue state. A slot tagger identifies slot values, which are used to update the candidate tracker. The candidate classifier uses the utterances and slot/intent descriptions to predict the final dialogue state. They also use an additional loss to penalize incorrect prediction on which slots appear in the current turn." ], [ "We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.", "Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.", "Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.", "Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. 
This is the average accuracy of predicting the value of a slot correctly.", "Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly.", "In order to better reflect model performance in our task's specific setting, we introduce changes in the definitions of evaluation metrics from prior work. These are listed below:", "[leftmargin=*]", "Joint goal accuracy calculation: Traditionally, joint goal accuracy has been defined as the accuracy of predicting the dialogue state for all domains correctly. This is not practical in our setup, as the large number of services would result in near zero joint goal accuracy if the traditional definition is used. Furthermore, an incorrect dialogue state prediction for a service in the beginning of a dialogue degrades the joint goal accuracy for all future turns, even if the predictions for all other services are correct. Hence, joint goal accuracy calculated this way may not provide as much insight into the performance on different services. To address these concerns, only the services which are active or pertinent in a turn are included in the dialogue state. Thus, a service ceases to be a part of the dialogue state once its intent has been fulfilled.", "Fuzzy matching for non-categorical slot values: The presence of non-categorical slots is another distinguishing feature of our dataset. These slots don't have a predefined vocabulary, and their values are predicted as a substring or span of the past user or system utterances. Drawing inspiration from the metrics used for slot tagging in spoken language understanding, we use a fuzzy matching score for non-categorical slots to reward partial matches with the ground truth.", "Average goal accuracy: To calculate average goal accuracy, we do not take into account instances when both the ground truth and the predicted values for a slot are empty. Since for a given slot, a large number of utterances have an empty assignment, models can achieve a relatively high average goal accuracy just by predicting an empty assignment for each slot unless specifically excluded as in our evaluation." ], [ "The test set contains a total of 21 services, among which 6 services are also present in the training set (seen services), whereas the remaining 15 are not present in the training set (unseen services). Table TABREF11 shows the evaluation metrics for the different submissions obtained on the test set. It also lists the performance of different submissions on seen and unseen services, helping evaluate the effectiveness in zero-shot settings. Team 9 achieved a very high joint goal accuracy of 86.53%, around 9% higher than the second-placed team. We observed the following trends across submissions:", "For unseen services, performance on categorical slots is comparable to that on non-categorical slots. On the other hand, for seen services, the performance on categorical slots is better. This could be because there is less signal to differentiate between the different possible values for a categorical slot when they have not been observed in the training set.", "The winning team's performance on seen services is similar to that of the other top teams. However, the winning team has a considerable edge on unseen services, outperforming the second team by around 12% in terms of joint goal accuracy. 
This margin was observed across both categorical and non-categorical slots.", "Among unseen services, when looking at services belonging to unseen domains, the winning team was ahead of the other teams by at least 15%. The performance on categorical slots for unseen domains was about the same as that for seen services and domains. For other teams, there was at least a 20% drop in accuracy of categorical slots in unseen domains vs seen domains and services.", "The joint goal accuracy of most of the models was worse by 15 percentage points on an average on the test set as compared to the dev set. This could be because the test set contains a much higher proportion of turns with at least one unseen services as compared to the dev set (77% and 45% respectively)." ], [ "In this paper, we summarized the Schema-Guided Dialogue State Tracking task conducted at the Eighth Dialogue System Technology Challenge. This task challenged participants to develop dialogue state tracking models for large scale virtual assistants, with particular emphasis on joint modeling across different domains and APIs for data-efficiency and zero-shot generalization to new/unseen APIs. In order to encourage the development of such models, we constructed a new dataset spanning 16 domains (and 4 new domains in dev and test sets), defining multiple APIs with overlapping functionality for each of these domains. We advocated the use of schema-guided approach to building large-scale assistants, facilitating data-efficient joint modeling across domains while reducing maintenance workload.", "The Schema-Guided Dialogue dataset released as part of this task is the first to highlight many of the aforementioned challenges. As a result, this task led to the development of several models utilizing the schema-guided approach for dialogue state tracking. The models extensively utilized pre-trained encoders like BERT BIBREF20, XLNet BIBREF31 etc. and employed data augmentation techniques to achieve effective zero-shot generalization to new APIs. The proposed schema-guided approach is fairly general and can be used to develop other dialogue system components such as language understanding, policy and response generation. We plan to explore them in future works." ], [ "The authors thank Guan-Lin Chao, Amir Fayazi and Maria Wang for their advice and assistance." ] ], "section_name": [ "Introduction", "Related Work", "Task", "Task ::: Schema-Guided Approach", "Dataset", "Dataset ::: Data Representation", "Dataset ::: Comparison With Other Datasets", "Dataset ::: Data Collection And Dataset Analysis", "Submissions", "Evaluation", "Results", "Summary", "Summary ::: Acknowledgements" ] }
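As a rough illustration of the modified joint goal accuracy and the fuzzy matching described in the evaluation section above, the sketch below scores only the active services in each turn and uses a token-overlap F1 as the fuzzy match for non-categorical slots. The data layout and the way slot scores are combined are simplifying assumptions, not the official scoring script.

```python
def fuzzy_match(pred, gold):
    """Token-level F1 between two strings, rewarding partial matches."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def joint_goal_accuracy(turns, noncategorical_slots):
    """turns: list of {service: {slot: (predicted, gold)}} for active services only.

    Each active service in a turn is scored by the minimum over its slots:
    exact match (0/1) for categorical slots, fuzzy match for non-categorical.
    """
    scores = []
    for turn in turns:
        for service, slots in turn.items():
            slot_scores = [
                fuzzy_match(p, g) if slot in noncategorical_slots else float(p == g)
                for slot, (p, g) in slots.items()
            ]
            scores.append(min(slot_scores) if slot_scores else 1.0)
    return sum(scores) / max(len(scores), 1)
```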
{ "answers": [ { "annotation_id": [ "e8bbaf26e0f73c3e2b8da0e0e3c348cb1c028643" ], "answer": [ { "evidence": [ "Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Thirdly, using a simulator greatly simplifies the data collection task and instructions as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains." ], "extractive_spans": [ "dialogue simulator" ], "free_form_answer": "", "highlighted_evidence": [ "Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "6f5fd31b33c80179de1a6382053682e4ce5b5cf6" ], "answer": [ { "evidence": [ "Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input. They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy." ], "extractive_spans": [ "back translation between English and Chinese" ], "free_form_answer": "", "highlighted_evidence": [ " They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "annotation_id": [ "bc74fc95e9e5fef1069cb300dea35873615d253e" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "e2e0273fb809e6a971d349d1a10cd6707727d515" ], "answer": [ { "evidence": [ "We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.", "Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.", "Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.", "Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.", "Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly." 
], "extractive_spans": [ "Active Intent Accuracy", "Requested Slot F1", "Average Goal Accuracy", "Joint Goal Accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.\n\nActive Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.\n\nRequested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.\n\nAverage Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.\n\nJoint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "annotation_id": [ "340c6a043ae3c6204f3785e07f64e71101e142b4" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "3e1f1be67d042ad4cab92ddede6905b3af4d1b72" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: The total number of intents (services in parentheses) and dialogues for each domain across train1, dev2 and test3 sets. Superscript indicates the datasets in which dialogues from the domain are present. Multi-domain dialogues contribute to counts of each domain. The domain Services includes salons, dentists, doctors, etc." ], "extractive_spans": [], "free_form_answer": "Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The total number of intents (services in parentheses) and dialogues for each domain across train1, dev2 and test3 sets. Superscript indicates the datasets in which dialogues from the domain are present. Multi-domain dialogues contribute to counts of each domain. The domain Services includes salons, dentists, doctors, etc." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ], "nlp_background": [ "two", "two", "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "question": [ "Where is the dataset from?", "What data augmentation techniques are used?", "Do all teams use neural networks for their models?", "How are the models evaluated?", "What is the baseline model?", "What domains are present in the data?" 
], "question_id": [ "5f0bb32d70ee8e4c4c59dc5c193bc0735fd751cc", "a88a454ac1a1230263166fd824e5daebb91cb05a", "bbaf7cbae88c085faa6bbe3319e4943362fe1ad4", "a6b99b7f32fb79a7db996fef76e9d83def05c64b", "d47c074012eae27426cd700f841fd8bf490dcc7b", "b43fa27270eeba3e80ff2a03754628b5459875d6" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Example schema for a digital wallet service.", "Figure 2: Dialogue state tracking labels after each user utterance in a dialogue in the context of two different flight services. Under the schema-guided approach, the annotations are conditioned on the schema (extreme left/right) of the underlying service.", "Figure 3: Detailed statistics of the SGD dataset.", "Table 1: Comparison of our SGD dataset to existing related datasets for task-oriented dialogue. Note that the numbers reported are for the training portions for all datasets except FRAMES, where the numbers for the complete dataset are reported.", "Table 2: The total number of intents (services in parentheses) and dialogues for each domain across train1, dev2 and test3 sets. Superscript indicates the datasets in which dialogues from the domain are present. Multi-domain dialogues contribute to counts of each domain. The domain Services includes salons, dentists, doctors, etc.", "Table 3: The best submission from each team, ordered by the joint goal accuracy on the test set. Teams marked with * submitted their papers to the workshop. We could not identify the teams for three of the submitted papers." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png" ] }
[ "What domains are present in the data?" ]
[ [ "2002.01359-5-Table2-1.png" ] ]
[ "Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather" ]
444
1612.05270
A Simple Approach to Multilingual Polarity Classification in Twitter
Recently, sentiment analysis has received a lot of attention due to the interest in mining opinions of social media users. Sentiment analysis consists in determining the polarity of a given text, i.e., its degree of positiveness or negativeness. Traditionally, Sentiment Analysis algorithms have been tailored to a specific language given the complexity of having a number of lexical variations and errors introduced by the people generating content. In this contribution, our aim is to provide a simple to implement and easy to use multilingual framework, that can serve as a baseline for sentiment analysis contests, and as starting point to build new sentiment analysis systems. We compare our approach in eight different languages, three of them have important international contests, namely, SemEval (English), TASS (Spanish), and SENTIPOLC (Italian). Within the competitions our approach reaches from medium to high positions in the rankings; whereas in the remaining languages our approach outperforms the reported results.
{ "paragraphs": [ [ "Sentiment analysis is a crucial task in the opinion mining field, where the goal is to extract opinions, emotions, or attitudes towards different entities (persons, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and other languages. Unsurprisingly, some researchers have tested the straightforward approach which consists in, first, translating the messages to English and, then, using a high-performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1 ) instead of creating a sentiment classifier optimized for a given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ).", "This manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., a labeled dataset) where the classifier should be competitive with sentiment classifiers carefully tuned for specific languages. Furthermore, our second contribution is to create a well-performing baseline to compare new sentiment classifiers in a broad range of languages or to bootstrap new sentiment analysis systems. Our approach is based on selecting the best text-transforming techniques that optimize the given performance measures, where the chosen techniques are robust to typical writing errors.", "In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach's ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results place our approach in good positions in all considered competitions, and it obtains excellent results in the other five languages tested. Finally, even though our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.", "The rest of the manuscript is organized as follows. Section SECREF2 describes our proposed Sentiment Analysis method. Section SECREF3 describes the datasets and contests used to test our approach, whereas the experimental results and the discussion are presented in Section SECREF4 . Finally, Section SECREF5 concludes." ], [ "We propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems, due to its simplicity and availability as open source software. As we mentioned, this baseline algorithm for multilingual Sentiment Analysis (B4MSA) was designed with the purpose of being multilingual and easy to implement. B4MSA is not a naïve baseline, as is experimentally demonstrated by evaluating it in several international competitions.", "In a nutshell, B4MSA starts by applying text-transformations to the messages, then the transformed text is represented in a vector space model (see Subsection SECREF13 ), and finally, a Support Vector Machine (with linear kernel) is used as the classifier.
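As an illustration of this overall pipeline, here is a minimal sketch using scikit-learn. It is not the authors' implementation: the normalization step covers only a few of the cross-language transformations listed below, character q-grams stand in for the richer token sets, and the parameter choices are placeholders.

```python
import re
import unicodedata
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def normalize(text):
    # A few cross-language transformations: lowercase (lc), strip diacritics
    # (del-diac), drop punctuation (del-punc), collapse repeated characters (del-d1).
    text = text.lower()
    text = "".join(c for c in unicodedata.normalize("NFD", text)
                   if unicodedata.category(c) != "Mn")
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"(.)\1+", r"\1", text)
    return text

# Character q-grams over the normalized text feed a linear-kernel SVM.
model = Pipeline([
    ("vec", TfidfVectorizer(preprocessor=normalize, analyzer="char",
                            ngram_range=(3, 5))),
    ("svm", LinearSVC()),
])

train_texts = ["I loooove this phone!!!", "worst service everrrr :("]
train_labels = ["positive", "negative"]
model.fit(train_texts, train_labels)
print(model.predict(["no me gustó para nada"]))
```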
B4MSA uses a number of text transformations that are categorized in cross-language features (see Subsection SECREF3 ) and language dependent features (see Subsection SECREF9 ). It is important to note that, all the text-transformations considered are either simple to implement or there is a well-known library (e.g. BIBREF6 , BIBREF7 ) to use them. It is important to note that to maintain the cross-language property, we limit ourselves to not use additional knowledge, this include knowledge from affective lexicons or models based on distributional semantics.", "To obtain the best performance, one needs to select those text-transformations that work best for a particular dataset, therefore, B4MSA uses a simple random search and hill-climbing (see Subsection SECREF14 ) in space of text-transformations to free the user from this delicate and time-consuming task. Before going into the details of each text-transformation, Table TABREF2 gives a summary of the text-transformations used as well as their parameters associated." ], [ "We defined cross-language features as a set of features that could be applied in most similar languages, not only related language families such as Germanic languages (English, German, etc.), Romance languages (Spanish, Italian, etc.), among others; but also similar surface features such as punctuation, diacritics, symbol duplication, case sensitivity, etc. Later, the combination of these features will be explored to find the best configuration for a given classifier.", "Generally, Twitter messages are full of slang, misspelling, typographical and grammatical errors among others; in order to tackle these aspects we consider different parameters to study this effect. The following points are the parameters to be considered as spelling features. Punctuation (del-punc) considers the use of symbols such as question mark, period, exclamation point, commas, among other spelling marks. Diacritic symbols (del-diac) are commonly used in languages such as Spanish, Italian, Russian, etc., and its wrong usage is one of the main sources of orthographic errors in informal texts; this parameter considers the use or absence of diacritical marks. Symbol reduction (del-d1), usually, twitter messages use repeated characters to emphasize parts of the word to attract user's attention. This aspect makes the vocabulary explodes. We applied the strategy of replacing the repeated symbols by one occurrence of the symbol. Case sensitivity (lc) considers letters to be normalized in lowercase or to keep the original source; the aim is to cut the words that are the same in uppercase and lowercase.", "We classified around 500 most popular emoticons, included text emoticons, and the whole set of unicode emoticons (around INLINEFORM0 ) defined by BIBREF8 into three classes: positive, negative and neutral, which are grouped under its corresponding polarity word defined by the class name.", "Table TABREF6 shows an excerpt of the dictionary that maps emoticons to their corresponding polarity class.", "N-words (word sequences) are widely used in many NLP tasks, and they have also been used in Sentiment Analysis BIBREF9 and BIBREF10 . To compute the N-words, the text is tokenized and N-words are calculated from tokens. For example, let INLINEFORM0 be the text, so its 1-words (unigrams) are each word alone, and its 2-words (bigrams) set are the sequences of two words, the set ( INLINEFORM1 ), and so on. 
INLINEFORM2 = {the lights, lights and, and shadows, shadows of, of your, your future}, so, given text of size INLINEFORM3 words, we obtain a set containing at most INLINEFORM4 elements. Generally, N-words are used up to 2 or 3-words because it is uncommon to find, between texts, good matches of word sequences greater than three or four words BIBREF11 .", "In addition to the traditional N-words representation, we represent the resulting text as q-grams. A q-grams is an agnostic language transformation that consists in representing a document by all its substring of length INLINEFORM0 . For example, let INLINEFORM1 be the text, its 3-grams set are INLINEFORM2 ", "so, given text of size INLINEFORM0 characters, we obtain a set with at most INLINEFORM1 elements. Notice that this transformation handles white-spaces as part of the text. Since there will be q-grams connecting words, in some sense, applying q-grams to the entire text can capture part of the syntactic and contextual information in the sentence. The rationale of q-grams is also to tackle misspelled sentences from the approximate pattern matching perspective BIBREF12 ." ], [ "The following features are language dependent because they use specific information from the language concerned. Usually, the use of stopwords, stemming and negations are traditionally used in Sentiment Analysis. The users of this approach could add other features such as part of speech, affective lexicons, etc. to improve the performance BIBREF13 .", "In many languages, there is a set of extremely common words such as determiners or conjunctions ( INLINEFORM0 or INLINEFORM1 ) which help to build sentences but do not carry any meaning for themselves. These words are known as Stopwords, and they are removed from text before any attempt to classify them. Generally, a stopword list is built using the most frequent terms from a huge document collection. We used the Spanish, English and Italian stopword lists included in the NLTK Python package BIBREF6 in order to identify them.", "Stemming is a well-known heuristic process in Information Retrieval field that chops off the end of words and often includes the removal of derivational affixes. This technique uses the morphology of the language coded in a set of rules that are applied to find out word stems and reduce the vocabulary collapsing derivationally related words. In our study, we use the Snowball Stemmer for Spanish and Italian, and the Porter Stemmer for English that are implemented in NLTK package BIBREF6 .", "Negation markers might change the polarity of the message. Thus, we attached the negation clue to the nearest word, similar to the approaches used in BIBREF9 . A set of rules was designed for common negation structures that involve negation markers for Spanish, English and Italian. For instance, negation markers used for Spanish are no (not), nunca, jamás (never), and sin (without). The rules (regular expressions) are processed in order, and their purpose is to negate the nearest word to the negation marker using only the information on the text, e.g., avoiding mainly pronouns and articles. For example, in the sentence El coche no es bonito (The car is not nice), the negation marker no and not (for English) is attached to its adjective no_bonito (not_nice)." ], [ "After text-transformations, it is needed to represent the text in suitable form in order to use a traditional classifier such as SVM. 
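Before turning to the vector representation, the cross-language transformations and the two tokenization schemes described above can be sketched as follows. This is an illustrative reconstruction rather than the B4MSA source; the parameter names mirror those in the text (lc, del-diac, del-punc, del-d1), and the example sentence is the one implied by the bigram set shown earlier.

```python
# Illustrative sketch of the cross-language text transformations, word n-grams,
# and character q-grams described above.
import re
import unicodedata

def preprocess(text, lc=True, del_diac=True, del_punc=True, del_d1=True):
    if lc:                      # case sensitivity (lc)
        text = text.lower()
    if del_diac:                # diacritic removal (del-diac)
        text = "".join(c for c in unicodedata.normalize("NFD", text)
                       if unicodedata.category(c) != "Mn")
    if del_punc:                # punctuation removal (del-punc)
        text = re.sub(r"[^\w\s]", " ", text)
    if del_d1:                  # collapse runs of a repeated symbol (del-d1), e.g. "holaaaa" -> "hola"
        text = re.sub(r"(.)\1+", r"\1", text)
    return re.sub(r"\s+", " ", text).strip()

def word_ngrams(text, n):
    toks = text.split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def char_qgrams(text, q):
    # q-grams keep white-spaces, so they can cross word boundaries.
    return [text[i:i + q] for i in range(len(text) - q + 1)]

s = preprocess("The lights and shadows of your future!!!")
print(word_ngrams(s, 2))   # ['the lights', 'lights and', 'and shadows', ...]
print(char_qgrams(s, 3))   # ['the', 'he ', 'e l', ...]
```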
It was decided to select the well known vector representation of a text given its simplicity and powerful representation. Particularly, it is used the Term Frequency-Inverse Document Frequency which is a well-known weighting scheme in NLP. TF-IDF computes a weight that represents the importance of words or terms inside a document in a collection of documents, i.e., how frequently they appear across multiple documents. Therefore, common words such as the and in, which appear in many documents, will have a low score, and words that appear frequently in a single document will have high score. This weighting scheme selects the terms that represent a document." ], [ "The model selection, sometimes called hyper-parameter optimization, is essential to ensure the performance of a sentiment classifier. In particular, our approach is highly parametric; in fact, we use such property to adapt to several languages. Table TABREF2 summarizes the parameters and their valid values. The search space contains more than 331 thousand configurations when limited to multilingual and language independent parameters; while the search space reaches close to 4 million configurations when we add our three language-dependent parameters. Depending on the size of the training set, each configuration needs several minutes on a commodity server to be evaluated; thus, an exhaustive exploration of the parameter space can be quite expensive making the approach useless in practice. To tackle the efficiency problems, we perform the model selection using two hyper-parameter optimization algorithms.", "The first corresponds to Random Search, described in depth in BIBREF14 . Random search consists on randomly sampling the parameter space and select the best configuration among the sample. The second algorithm consists on a Hill Climbing BIBREF15 , BIBREF16 implemented with a memory to avoid testing a configuration twice. The main idea behind hill climbing H+M is to take a pivoting configuration, explore the configuration's neighborhood, and greedily moves to the best neighbor. The process is repeated until no improvement is possible. The configuration neighborhood is defined as the set of configurations such that these differ in just one parameter's value. This rule is strengthened for tokenizer (see Table TABREF2 ) to differ in a single internal value not in the whole parameter value. More precisely, let INLINEFORM0 be a valid value for tokenizer and INLINEFORM1 the set of valid values for neighborhoods of INLINEFORM2 , then INLINEFORM3 and INLINEFORM4 for any INLINEFORM5 .", "To guarantee a better or equal performance than random search, the H+M process starts with the best configuration found in the random search. By using H+M, sample size can be set to 32 or 64, as rule of thumb, and even reach improvements in most cases (see § SECREF4 ). Nonetheless, this simplification and performance boosting comes along with possible higher optimization times. Finally, the performance of each configuration is obtained using a cross-validation technique on the training data, and the metrics are usually used in classification such as: accuracy, score INLINEFORM0 , and recall, among others." ], [ "Nowadays, there are several international competitions related to text mining, which include diverse tasks such as: polarity classification (at different levels), subjectivity classification, entity detection, and iron detection, among others. These competitions are relevant to measure the potential of different proposed techniques. 
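Returning briefly to the model-selection procedure described above, the combination of random search and memory-based hill climbing (H+M) can be sketched as follows. The configuration space and the `evaluate` function are toy placeholders (in practice the score is a cross-validated classification metric), and the special neighborhood rule for the tokenizer parameter is omitted.

```python
# Illustrative sketch (not the B4MSA search code) of random search followed by
# hill climbing with memory (H+M) over a discrete configuration space.
import random

SPACE = {"lc": [True, False], "del_diac": [True, False],
         "del_punc": [True, False], "ngram": [1, 2, 3]}

def evaluate(cfg):
    return random.random()        # placeholder for a cross-validation score

def random_search(n_samples=32):
    configs = [{k: random.choice(v) for k, v in SPACE.items()}
               for _ in range(n_samples)]
    return max(configs, key=evaluate)

def neighbors(cfg):
    # Configurations differing from `cfg` in exactly one parameter value.
    for k, values in SPACE.items():
        for v in values:
            if v != cfg[k]:
                yield {**cfg, k: v}

def hill_climb_with_memory(start):
    seen = {}                     # memory: never test the same configuration twice
    def score(cfg):
        key = tuple(sorted(cfg.items()))
        if key not in seen:
            seen[key] = evaluate(cfg)
        return seen[key]
    best = start
    while True:
        top = max(neighbors(best), key=score)   # greedy move to the best neighbor
        if score(top) <= score(best):
            return best
        best = top

best_cfg = hill_climb_with_memory(random_search(32))
```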
In this case, we focused on polarity classification task, hence, we developed a baseline method with an acceptable performance achieved in three different contests, namely, TASS'15 (Spanish) BIBREF17 , SemEval'15-16 (English) BIBREF18 , BIBREF19 , and SENTIPOLC'14 (Italian) BIBREF20 . In addition, our approach was tested with other languages (Arabic, German, Portuguese, Russian, and Swedish) to show that is feasible to use our framework as basis for building more complex sentiment analysis systems. From these languages, datasets and results can be seen in BIBREF21 , BIBREF3 and BIBREF2 .", "Table TABREF15 presents the details of each of the competitions considered as well as the other languages tested. It can be observed, from the table, the number of examples as well as the number of instances for each polarity level, namely, positive, neutral, negative and none. The training and development (only in SemEval) sets are used to train the sentiment classifier, and the gold set is used to test the classifier. In the case there dataset was not split in training and gold (Arabic, German, Portuguese, Russian, and Swedish) then a cross-validation (10 folds) technique is used to test the classifier. The performance of the classifier is presented using different metrics depending the competition. SemEval uses the average of score INLINEFORM0 of positive and negative labels, TASS uses the accuracy and SENTIPOLC uses a custom metric (see BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 )." ], [ "We tested our framework on two kinds of datasets. On one hand, we compare our performance on three languages having well known sentiment analysis contests; here, we compare our work against competitors of those challenges. On the other hand, we selected five languages without popular opinion mining contests; for these languages, we compare our approach against research works reporting the used corpus." ], [ "Figure FIGREF17 shows the performance on four contests, corresponding to three different languages. The performance corresponds to the multilingual set of features, i.e., we do not used language-dependent techniques.", "Figures UID18 - UID21 illustrates the results on each challenge, all competitors are ordered in score's descending order (higher is better). The achieved performance of our approach is marked with a horizontal line on each figure. Figure UID22 briefly describes each challenge and summarizes our performance on each contest; also, we added three standard measures to simplify the insight's creation of the reader.", "The winner method in SENTIPOLC'14 (Italian) is reported in BIBREF22 . This method uses three groups of features: keyword and micro-blogging characteristics, Sentiment Lexicons, SentiWordNet and MultiWordNet, and Distributional Semantic Model (DSM) with a SVM classifier. In contrast with our method, in BIBREF22 three external sentiment lexicons dictionaries were employed; that is, external information.", "In TASS'15 (Spanish) competition, the winner reported method was BIBREF23 , which proposed an adaptation based on a tokenizer of tweets Tweetmotif BIBREF24 , Freeling BIBREF25 as lemmatizer, entity detector, morphosyntactic labeler and a translation of the Afinn dictionary. In contrast with our method, BIBREF23 employs several complex and expensive tools. In this task we reached the fourteenth position with an accuracy of INLINEFORM0 . Figure UID19 shows the B4MSA performance to be over two thirds of the competitors.", "The remaining two contests correspond to the SemEval'15-16. 
The B4MSA performance in SemEval is depicted in Figures UID20 and UID21 ; here, B4MSA does not perform as well as in other challenges, mainly because, contrary to other challenges, SemEval rules promotes the enrichment of the official training set. To be consistent with the rest of the experiments, B4MSA uses only the official training set. The results can be significantly improved using larger training datasets; for example, joining SemEval'13 and SemEval'16 training sets, we can reach INLINEFORM0 for SemEval'16, which improves the B4MSA's performance (see Table FIGREF17 ).", "In SemEval'15, the winner method is BIBREF26 , which combines three approaches among the participants of SemEval'13, teams: NRC-Canada, GU-MLT-LT and KLUE, and from SemEval'14 the participant TeamX all of them employing external information. In SemEval'16, the winner method was BIBREF27 is composed with an ensemble of two subsystems based on convolutional neural networks, the first subsystem is created using 290 million tweets, and the second one is feeded with 150 million tweets. All these tweets were selected from a very large unlabeled dataset through distant supervision techniques.", "Table TABREF23 shows the multilingual set of techniques and the set with language-dependent techniques; for each, we optimized the set of parameters through Random Search and INLINEFORM0 (see Subsection SECREF14 ). The reached performance is reported using both cross-validation and the official gold-standard. Please notice how INLINEFORM1 consistently reaches better performances, even on small sampling sizes. The sampling size is indicated with subscripts in Table TABREF23 . Note that, in SemEval challenges, the cross-validation performances are higher than those reached by evaluating the gold-standard, mainly because the gold-standard does not follow the distribution of training set. This can be understood because the rules of SemEval promote the use of external knowledge.", "Table TABREF24 compares our performance on five different languages; we do not apply language-dependent techniques. For each comparison, we took a labeled corpus from BIBREF3 (Arabic) and BIBREF21 (the remaining languages). According to author's reports, all tweets were manually labeled by native speakers as pos, neg, or neu. The Arabic dataset contains INLINEFORM0 items; the other datasets contain from 58 thousand tweets to more than 157 thousand tweets. We were able to fetch a fraction of the original datasets; so, we drop the necessary items to hold the original class-population ratio. The ratio of tweets in our training dataset, respect to the original dataset, is indicated beside the name. As before, we evaluate our algorithms through a 10-fold cross validation.", "In BIBREF3 , BIBREF2 , the authors study the effect of translation in sentiment classifiers; they found better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. Both, on the technical side, both papers use fine tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language. 
Our approach achieves this level of performance because it optimizes a set of parameters carefully selected to work across a variety of languages and to be robust to informal writing; the latter problem is not properly tackled in many cases." ], [ "We presented a simple-to-implement multilingual framework for polarity classification whose main contributions are twofold. On one hand, our approach can serve as a baseline against which to compare other classification systems. It considers techniques for text representation such as spelling features, emoticons, word-based n-grams, character-based q-grams, and language-dependent features. On the other hand, our approach is a framework for practitioners or researchers looking for a bootstrapping sentiment classifier with which to build more elaborate systems.", "Besides the text transformations, the proposed framework uses an SVM classifier (with linear kernel) and hyper-parameter optimization using random search and H+M over the space of text transformations. The experimental results show good overall performance in all the international contests considered, and the best reported results in the other five languages tested.", "It is important to note that all the methods that outperformed B4MSA in the sentiment analysis contests use extra knowledge (including lexicons), whereas B4MSA uses only the information provided by each contest. In future work, we will extend our methodology to include extra knowledge in order to improve performance." ], [ "We would like to thank Valerio Basile, Julio Villena-Roman, and Preslav Nakov for kindly giving us access to the gold standards of SENTIPOLC'14, TASS'15 and SemEval 2015 & 2016, respectively. The authors also thank Elio Villaseñor for the helpful discussions in the early stages of this research." ] ], "section_name": [ "Introduction", "Our Approach: Multilingual Polarity Classification", "Cross-language Features", "Language Dependent Features", "Text Representation", "Parameter Optimization", "Datasets and contests", "Experimental Results", "Performance on sentiment analysis contests", "Conclusions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "39b20c00019c285ecaad375b7430c5805dea3c1c" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work" ], "extractive_spans": [], "free_form_answer": "Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "341ad09b6c265d77f84eb5725125e20a6956cfff" ], "answer": [ { "evidence": [ "In BIBREF3 , BIBREF2 , the authors study the effect of translation in sentiment classifiers; they found better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. Both, on the technical side, both papers use fine tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language. Our approach achieves those performance's levels since it optimizes a set of parameters carefully selected to work on a variety of languages and being robust to informal writing. The latter problem is not properly tackled in many cases.", "FLOAT SELECTED: Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters." ], "extractive_spans": [], "free_form_answer": "Arabic, German, Portuguese, Russian, Swedish", "highlighted_evidence": [ "Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, in the overall, B4MSA reaches superior performances regardless of the language.", "FLOAT SELECTED: Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "eab76d0ba706f68c3ba237db0e1d565b7004d80a" ], "answer": [ { "evidence": [ "In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results locate our approach in good positions for all considered competitions; and excellent results in the other five languages tested. Finally, even when our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques." 
], "extractive_spans": [ "Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish" ], "free_form_answer": "", "highlighted_evidence": [ "In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "eb77add728d948190603d5191bd6a6b506eef6e4" ], "answer": [ { "evidence": [ "In a nutshell, B4MSA starts by applying text-transformations to the messages, then transformed text is represented in a vector space model (see Subsection SECREF13 ), and finally, a Support Vector Machine (with linear kernel) is used as the classifier. B4MSA uses a number of text transformations that are categorized in cross-language features (see Subsection SECREF3 ) and language dependent features (see Subsection SECREF9 ). It is important to note that, all the text-transformations considered are either simple to implement or there is a well-known library (e.g. BIBREF6 , BIBREF7 ) to use them. It is important to note that to maintain the cross-language property, we limit ourselves to not use additional knowledge, this include knowledge from affective lexicons or models based on distributional semantics." ], "extractive_spans": [ "text-transformations to the messages", "vector space model", "Support Vector Machine" ], "free_form_answer": "", "highlighted_evidence": [ "In a nutshell, B4MSA starts by applying text-transformations to the messages, then transformed text is represented in a vector space model (see Subsection SECREF13 ), and finally, a Support Vector Machine (with linear kernel) is used as the classifier." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?", "In which languages did the approach outperform the reported results?", "What eight language are reported on?", "What are the components of the multilingual framework?" ], "question_id": [ "458dbf217218fcab9153e33045aac08a2c8a38c6", "cebf3e07057339047326cb2f8863ee633a62f49f", "ef8099e2bc0ac4abc4f8216740e80f2fa22f41f6", "1e68a1232ab09b6bff506e442acc8ad742972102" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "twitter", "twitter", "twitter", "twitter" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Parameter list and a brief description of the functionality", "Table 3: Datasets details from each competition tested in this work", "Figure 1: The performance listing in four difference challenges. The horizontal lines appearing in a) to d) correspond to B4MSA’s performance. All scores were computed using the official gold-standard and the proper score for each challenge.", "Table 4: B4MSA’s performance on cross-validation and gold standard. The subscript at right of each score means for the random-search’s parameter (sampling size) needed to find that value.", "Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters." ], "file": [ "3-Table1-1.png", "6-Table3-1.png", "7-Figure1-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
[ "How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?", "In which languages did the approach outperform the reported results?" ]
[ [ "1612.05270-6-Table3-1.png" ], [ "1612.05270-Performance on sentiment analysis contests-8", "1612.05270-8-Table5-1.png" ] ]
[ "Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428", "Arabic, German, Portuguese, Russian, Swedish" ]
445
1705.03151
Phonetic Temporal Neural Model for Language Identification
Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID). However, the use of phonetic information has been largely overlooked by most existing neural LID methods, although this information has been used very successfully in conventional phonetic LID systems. We present a phonetic temporal neural model for LID, which is an LSTM-RNN LID system that accepts phonetic features produced by a phone-discriminative DNN as the input, rather than raw acoustic features. This new model is similar to traditional phonetic LID methods, but the phonetic knowledge here is much richer: it is at the frame level and involves compacted information of all phones. Our experiments conducted on the Babel database and the AP16-OLR database demonstrate that the temporal phonetic neural approach is very effective, and significantly outperforms existing acoustic neural models. It also outperforms the conventional i-vector approach on short utterances and in noisy conditions.
{ "paragraphs": [ [ "Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues." ], [ "There are more than 5000 languages in the world, and each language has distinct properties at different levels, from acoustic to semantics BIBREF0 , BIBREF1 , BIBREF2 . A number of studies have investigated how humans use these properties as cues to distinguish between languages BIBREF3 . For example, Muthusamy BIBREF4 found that familiarity with a language is an important factor affecting LID accuracy, and that longer speech samples are easier to identify. Moreover, people can easily tell what cues they use for identification, including phonemic inventory, word usage, and prosody. More thorough investigations were conducted by others by modifying speech samples to promote one or several factors. For example, Mori et al. BIBREF5 found that people are able to identify Japanese and English fairly reliably even when phone information is reduced. They argued that other non-linguistic cues such as intensity and pitch were used to decide the language. Navratil BIBREF6 evaluated the importance of various types of knowledge, including lexical, phonotactic and prosodic, by asking humans to identify five languages, Chinese, English, French, German and Japanese. Subjects were presented with unaltered speech samples, samples with randomly altered syllables, and samples with the vocal-tract information removed to leave only the F0 and amplitude. Navratil found that the speech samples with random syllables are more difficult to identify compared to the original samples (73.9% vs 96%), and removing vocal-tract information leads to significant performance reduction (73.9% vs 49.4%). This means that with this 5-language LID task, the lexical and phonotactic information is important for human decision making.", "The LID experiments summarised above suggest that languages can be discriminated by multiple cues at different levels, and the cues used to differentiate different language pairs are different. In general, the cues can be categorized into three levels: feature level, token level and prosody level. At the feature level, different languages have their own implementation of phones, and the transitions between phones are also different. This acoustic speciality is a short-time property and can be identified by certain spectral analysis and feature extraction of our auditory system. At the token level, the distribution and transition patterns of linguistic tokens at various levels are significantly different. The tokens can be phones/phonemes, syllables, words or even syntactic or semantic tags. At the prosody level, the duration, pitch and stress patterns often differ between languages. For example, patterns of stress can provide an important cue for discriminating between two stressed languages, duration can also be potentially useful, and the tone patterns of syllables or words offer a clear cue to discriminate between tonal languages." ], [ "Based on the different types of cues, multiple LID approaches have been proposed. Early work generally focused on feature-level cues. Feature-based methods use strong statistical models built on raw acoustic features to make the LID decision. For instance, Cimarusti used LPC features BIBREF7 , and Foil et al. 
BIBREF8 investigated formant features. Dynamic features that involve temporal information were also demonstrated to be effective BIBREF9 . The statistical models used include Gaussian mixture models (GMMs) BIBREF10 , BIBREF11 , hidden Markov models (HMMs) BIBREF12 , BIBREF13 , neural networks (NNs) BIBREF14 , BIBREF15 , and support vector machines (SVMs) BIBREF16 . More recently, a low-rank GMM model known as the i-vector model was proposed and achieved significant success BIBREF17 , BIBREF18 . This model constrains the mean vectors of the GMM components in a low-dimensional space to improve the statistical strength for model training, and uses a task-oriented discriminative model (e.g., linear discriminative analysis, LDA) to improve the decision quality at run-time, leading to improved LID performance. Due to the short-time property of the features, most feature-based methods model the distributional characters rather than the temporal characters of speech signals.", "The token-based approach is based on the characters of high-level tokens. Since the dynamic properties of adjacent tokens are more stable than adjacent raw features, temporal characters can be learned with the token-based approach, in additional to the distributional characters. A typical approach is to convert speech signals into phone sequences, and then build an n-gram language model (LM) for each target language to evaluate the confidence that the input speech matches that language. This is the famous phone recognition and language modelling (PRLM) approach. Multiple PRLM variants have been proposed, such as parallel phone recognition followed by LM (PPRLM) BIBREF19 , BIBREF20 , and phone recognition on a multilingual phone set BIBREF21 . Other tokens such as syllables BIBREF22 and words BIBREF23 , BIBREF24 have also been investigated.", "The prosody-based approach utilizes patterns of duration, pitch, and stress to discriminate between languages. For example, Foil et al. BIBREF8 studied formant and prosodic features and found formant features to be more discriminative. Rouas et al. BIBREF25 modeled pure prosodic features by GMMs and found that their system worked well on read speech, but could not deal with the complexity of spontaneous speech prosody. Muthusamy BIBREF15 used pitch variation, duration and syllable rate. Duration and pitch patterns were also used by Hazen BIBREF21 . In most cases, the prosodic information is used as additional knowledge to improve feature or token-based LID.", "Most of the above methods, no matter what information is used, heavily rely on probabilistic models to accumulate evidence from a long speech segment. For example, the PRLM method requires an n-gram probability of the phonetic sequence, and the GMM/i-vector method requires the distribution of the acoustic feature. Therefore, these approaches require long test utterances, leading to inevitable latency in the LID decision. This latency is a serious problem for many practical applications, e.g., code-switching ASR, where multiple languages may be contained within a single block of speech. For quick LID, frame-level decision is highly desirable, which therefore cannot rely on probabilistic models.", "The recently emerging deep learning approach solves this problem by using various deep neural networks (DNNs) to produce frame-level LID decisions. An early successful deep neural model was developed by Lopez-Moreno et al. 
BIBREF26 , who proposed an approach based on a feed-forward deep neural network (FFDNN), which accepts raw acoustic features and produces frame-level LID decisions. The score for utterance-based decision is calculated by averaging the scores of the frame-level decisions. This was extended by others with the use of various neural model structures, e.g., CNN BIBREF27 , BIBREF28 and TDNN BIBREF29 , BIBREF30 . These DNN models are feature-based, but they consider a large context window, and can therefore learn the feature's temporal information, which is not possible with conventional feature-based models (such as the i-vector model), that only learn distributional information. The temporal information can be better learned by recurrent neural networks (RNNs), as proposed by Gonzalez-Dominguez et al. BIBREF31 . Using an RNN structure based on the long-short term memory unit (LSTM), the authors reported better performance with fewer parameters. This RNN approach was further developed by others, e.g., BIBREF32 , BIBREF33 .", "It should be noted that DNNs have been used in other ways in LID. For example, Song et al. BIBREF34 used a DNN to extract phonetic feature for the i-vector system, and Ferrer et al. BIBREF35 proposed a DNN i-vector approach that uses posteriors produced by a phone-discriminative FFDNN to compute the Baum-Welch statistics. Tian et al. BIBREF36 extended this by using an RNN to produce the posteriors. These methods all use neural models as part of the system, but their basic framework is still probabilistic, so they share the same problem of decision latency. In this paper, we focus on the pure neural approach that uses neural models as the basic framework, so that short-time language information can be learned by frame-level discriminative training." ], [ "All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. 
As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach.", "Table 1 summarizes different systems that use deep neural models in LID. The probabilistic approach uses DNNs as part of a probabilistic system, e.g., GMM or i-vector, while the neural approach uses various types of DNNs as the decision architecture. Both approaches may use either acoustic features or phonetic features. The proposed PTN approach is at the bottom-right of the table." ], [ "The remainder of the paper is organized as follows: the model structures of the PTN approach will be presented in Section \"Phonetic neural modelling for LID\" , which is followed by the implementation details in Section \"Model structure\" . The experiments and results are reported in Section \"Experiments\" , and some conclusions and future work will be presented in Section \"Conclusions\" ." ], [ "In this section, we present the models that employ phonetic information for RNN LID. Although the phonetically aware approach treats phonetic information as auxiliary knowledge, the PTN approach uses phonetic information as the only input into the RNN LID system. Both are depicted in Fig. 1 ." ], [ "The instinctive idea for utilizing phonetic information in the RNN LID system is to treat it as auxiliary knowledge, which we call a phonetically aware approach. Intuitively, this can be regarded as a knowledge-fusion method that uses both the phonetic and acoustic features to learn LID models. Fig. 1 (a) shows this model. A phonetic DNN model (this may be in any structure, such as FFDNN, RNN, TDNN) is used to produce frame-level phonetic features. These can be read from anywhere in the phonetic DNN, such as the output, or the last hidden layer, and then be propagated to the LID model, an LSTM-RNN in our study. This propagated phonetic information can be accepted by the LID model in different ways. For example, it can be part of the input, or as an additional term of the gate or non-linear activation functions." ], [ "The second model, which we call the PTN model, completely replaces the acoustic feature with the phonetic feature, and thus entirely relies on the properties of the phonetic representation. This learning is based on the RNN model, therefore the temporal patterns of the phonetic features can be learned. This PTN system is shown in Fig. 1 (b). Although the PTN model is a special, `aggressive' case of the phonetically aware approach, the success of this model offers a deeper insight into the LID task as it rediscovers the importance of the temporal properties of phonetic representations." ], [ "The rationality of the PTN approach can be understood from two perspectives: the phonetic perspective, which relates to what information is important, and the transfer learning perspective, which relates to how this information is learned.", "Phonetic perspective: The PTN approach adopts the long-standing hypothesis (as used by the PRLM model) that languages should be discriminated by phonetic rather than spectral properties. However this has been largely overlooked since the success of the i-vector approach, which achieved good performance using only raw acoustic features. However, Song et al. 
BIBREF34 recently rediscovered the value of phonetic features in the i-vector model. The PTN approach proposed here follows the same idea and rediscovers the value of phonetic features in the neural model. We argue that this value is more important for the neural model than for the probabilistic model (e.g., i-vector), as its decision is based on only a small number of frames, and thus requires that the feature involves more language-related information and less noise and uncertainties. The i-vector model, in contrast, can utilize more speech signals, hence can discover language-related information from the distributional patterns even with raw acoustic features.", "Both the PTN approach and the historical token-based approach share the same idea of utilizing phonetic information and modelling the temporal patterns, but they are fundamentally different. Firstly, the phonetic information in the PTN approach is frame-level, while in conventional token-based methods this information is unit-level. Therefore, the PTN approach can represent phonetic properties at a higher temporal resolution. Secondly, conventional token-based methods represent phonetic information as sequences derived from phone recognition, while the PTN approach represents phonetic information as a feature vector that involves information contributed by all phones, and thus more detailed phonetic information is represented. Finally, the back-end model of the conventional token-based approach is an n-gram LM based on discrete tokens and trained with the maximum likelihood (ML) criterion, while the back-end model of the PTN approach is an RNN, which functions similarly to an RNN LM, but is based on continuous phonetic features, and trained with a task-oriented criterion that discriminates the target languages.", "Transfer learning perspective: The second perspective to understand the PTN approach is from the transfer learning perspective BIBREF37 . It is well known that DNNs perform very well at learning task-oriented features from raw data. This is the hypothesis behind conventional acoustic RNN LID methods: if the neural model is successfully trained, it can learn any useful information from the raw acoustic features layer by layer, including the phonetic information. It therefore initially seems unnecessary to design our PTN phonetic feature learning and modelling architecture. However, we argue that using the language labels alone to learn LID-related information from raw acoustic features is highly ineffective, because these labels are too coarse to provide sufficient supervision. With the PTN model, feature extraction is trained on speech data labelled with phones or words which are highly informative and fine-grained (compared to language labels), leading to a strong DNN model for phonetic feature extraction. Importantly, phone discrimination and language identification are naturally correlated (from our phonetic perspective), which means that the phonetic features learned with the strong phone/word supervision involves rich information suitable for LID. This is an example of transfer learning, where a related task (i.e., phone discrimination) is used to learn features for another task (LID).", "The PTN approach also involves another two transfer learning schemes: cross language and cross condition (databases). This means that the phonetic DNN can be learned with any speech data in any language. 
This property was identified in token-based LID BIBREF19 , however it is more important for the phonetic neural models, as training the phonetic DNN requires a large amount of speech data which is often not available for the target languages and the operating conditions under test. Moreover, it is also possible to train the phonetic DNN with multilingual, multi-conditional data BIBREF38 , resulting in robust and reliable phonetic feature extraction.", "In summary, the PTN approach utilizes a detailed phonetic representation (DNN phonetic feature), and a powerful temporal model (LSTM-RNN) to capture the phonetic temporal properties of a language with a high temporal resolution. It also utilizes three types of transfer learning to ensure that the phonetic feature is representative and robust. Our PTN approach is therefore very powerful and flexible, and reconfirms the belief of many LID researchers that phonetic temporal information is highly valuable in language discrimination, not only for humans but also for machines." ], [ "This section presents the details of the phonetic neural LID models, including both the phonetically aware model and the PTN model. The phonetic DNN can be implemented in various DNN structures, and here we choose the TDNN BIBREF39 which can learn long-term phonetic patterns and performed well in our experiments.", "For the LID neural model, we choose the LSTM-RNN. One reason for this choice is that LSTM-RNN has been demonstrated to perform well in both the pure neural LID approach BIBREF31 and the neural-probabilistic hybrid LID approach BIBREF36 . Another reason is that the RNN model can learn the temporal properties of speech signals, which is in accordance with our motivation to model the phonetic dynamics, as in the conventional PRLM approach BIBREF20 . We first describe the LSTM-RNN structure used for LID, and then present the model structures of the phonetically aware acoustic RNN model and PTN model." ], [ "The LSTM-RNN model used in this study is a one-layer RNN model, where the hidden units are LSTM. The structure proposed by Sak et al. BIBREF40 is used, as shown in Fig. 2 .", "The associated computation is given as follows: ", "$$i_t &=& \\sigma (W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i) \\nonumber \\\\\nf_t &=& \\sigma (W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f) \\nonumber \\\\\nc_t &=& f_t \\odot c_{t-1} + i_t \\odot g(W_{cx}x_t + W_{cr}r_{t-1} + b_c) \\nonumber \\\\\no_t &=& \\sigma (W_{ox}x_t + W_{or}r_{t-1} + W_{oc}c_t + b_o) \\nonumber \\\\\nm_t &=& o_t \\odot h(c_t) \\nonumber \\\\\nr_t &=& W_{rm} m_t \\nonumber \\\\\np_t &=& W_{pm} m_t \\nonumber \\\\\ny_t &=& W_{yr}r_t + W_{yp}p_t + b_y \\nonumber $$ (Eq. 13) ", "In the above equations, the $W$ terms denote weight matrices, and those associated with the cells were constrained to be diagonal in our implementation. The $b$ terms denote bias vectors. $x_t$ and $y_t$ are the input and output symbols respectively; $i_t$ , $f_t$ , $o_t$ represent the input, forget and output gates, respectively; $c_t$ is the cell and $m_t$ is the cell output. $r_t$ and $b$0 are two output components derived from $b$1 , where $b$2 is recurrent and fed to the next time step, while $b$3 is not recurrent and contributes to the present output only. $b$4 is the logistic sigmoid function, and $b$5 and $b$6 are non-linear activation functions, chosen to be hyperbolic. 
$b$7 denotes element-wise multiplication.", "In this study, the LSTM layer consists of $1,024$ cells, and the dimensionality of both the recurrent and non-recurrent projections is set to 256. The natural stochastic gradient descent (NSGD) algorithm BIBREF41 was employed to train the model. During the training and decoding, the cells were reset for each 20 frames to ensure only short-time patterns are learned." ], [ "In the phonetically aware model, the phonetic feature is read from the phonetic DNN and is propagated to the LID RNN as additional information to assist the acoustic neural LID. The phonetic feature can be read either from the output (phone posterior) or the last hidden layer (logits), and can be propagated to different components of the RNN LID model, e.g., the input/forget/output gates and/or the non-linear activation functions.", "Fig. 3 (a) illustrates a simple configuration, where the phonetic DNN is a TDNN model, and the feature is read from the last hidden layer. The phonetic feature is propagated to the non-linear function $g(\\cdot )$ . With this configuration, calculation of the LID RNN is similar, except that the cell value should be updated as follows: $\nc_t = f_t \\odot c_{t-1} + i_t \\odot g(W_{cx}x_t + W_{cr}r_{t-1} + \\underline{W^{\\prime }_{c\\phi }\\phi _{t}} + b_c)\n$ ", "where $\\phi _t$ is the phonetic feature obtained from the phonetic DNN." ], [ "The phonetically aware acoustic RNN model is an acoustic-based approach, with the phonetic feature used as auxiliary information. In contrast, the PTN approach assumes that the phonetic temporal properties cover most of the information for language discrimination, so the acoustic feature is not important any more. Therefore, it removes all acoustic features and uses the phonetic features as the only input of the LID RNN, as shown in Fig. 3 (b).", "It is interesting to compare the PTN approach with other LID approaches. Firstly, it can be regarded as a new version of the conventional PRLM approach, particularly the recent PRLM implementation using RNN as the LM BIBREF42 . The major difference is that the PTN approach uses frame-level phonetic features while the PRLM approach uses token-level phonetic sequences; in addition, the phonetic information in the PTN approach is much richer than for PRLM, as it is represented as a continuous phonetic vector rather than discrete phonetic symbols.", "The PTN approach is also correlated to the neural-probabilistic hybrid approach, where the phonetic DNN is used to produce phonetic features, from which the GMM or i-vector model is constructed. The PTN approach uses the same phonetic features, but employs an RNN model to describe the dynamic property of the feature, instead of modelling the distributional property using GMM or i-vector models. As will be discussed in the next section, temporal modelling is very important for phonetic neural models.", "Finally, compared to the conventional acoustic RNN LID model, the PTN model uses phonetic features rather than acoustic features. Since the phonetic features can be learned with a very large speech database, they are much more robust against noise and uncertainties (e.g., speaker traits and channel distortions) than the raw acoustic features. This suggests that the PTN approach is more robust against noise than the conventional acoustic RNN approach." ], [ "The experiments were conducted on two databases: the Babel database and the AP16-OLR database. 
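Before describing the databases in detail, the modified cell update of the phonetically aware model above can be sketched as follows. This is a minimal NumPy illustration, not the Kaldi nnet3 implementation used in the experiments; the dimensions follow the description in the text, while the random weights and inputs are placeholders. In the PTN case, the acoustic input x_t would simply be replaced by the phonetic feature.

```python
# Minimal NumPy sketch of the projected-LSTM cell above, with the phonetically
# aware modification: the phonetic feature phi_t enters the g(.) input of the
# cell update. Weights are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(0)
D_x, D_phi, D_c, D_r = 23, 256, 1024, 256      # Fbank, phonetic, cell, projection dims

def W(rows, cols):
    return 0.1 * rng.standard_normal((rows, cols))

Wix, Wir, wic, bi = W(D_c, D_x), W(D_c, D_r), 0.1 * rng.standard_normal(D_c), np.zeros(D_c)
Wfx, Wfr, wfc, bf = W(D_c, D_x), W(D_c, D_r), 0.1 * rng.standard_normal(D_c), np.zeros(D_c)
Wcx, Wcr, Wcphi, bc = W(D_c, D_x), W(D_c, D_r), W(D_c, D_phi), np.zeros(D_c)
Wox, Wor, woc, bo = W(D_c, D_x), W(D_c, D_r), 0.1 * rng.standard_normal(D_c), np.zeros(D_c)
Wrm, Wpm = W(D_r, D_c), W(D_r, D_c)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def step(x_t, phi_t, c_prev, r_prev):
    i = sigmoid(Wix @ x_t + Wir @ r_prev + wic * c_prev + bi)   # diagonal peephole terms
    f = sigmoid(Wfx @ x_t + Wfr @ r_prev + wfc * c_prev + bf)
    c = f * c_prev + i * np.tanh(Wcx @ x_t + Wcr @ r_prev + Wcphi @ phi_t + bc)
    o = sigmoid(Wox @ x_t + Wor @ r_prev + woc * c + bo)
    m = o * np.tanh(c)
    return c, Wrm @ m, Wpm @ m                                  # cell, recurrent r_t, p_t

c, r = np.zeros(D_c), np.zeros(D_r)
for _ in range(20):                 # cells are reset every 20 frames in the paper
    x_t, phi_t = rng.standard_normal(D_x), rng.standard_normal(D_phi)
    c, r, p = step(x_t, phi_t, c, r)
```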
The Babel database was collected as part of the IARPA (Intelligence Advanced Research Projects Activity) Babel program, which aimed to develop speech technologies for low-resource languages. The sampling rate is 8 kHz and the sample size is 16 bits. In this paper, we chose speech data from seven languages in the Babel database: Assamese, Bengali, Cantonese, Georgian, Pashto Tagalog and Turkish. For each language, an official training and development dataset were provided. The training datasets contain both conversational and scripted speech, and the development datasets only contain conversational speech. We used the entire training set of each language for model training, but randomly selected $2,000$ utterances from the development set of each language to perform testing.", "The training data sets from the seven languages are as follows: Assamese 75 hours, Bengali 87 hours, Cantonese 175 hours, Georgian 64 hours, Pashto 111 hours, Tagalog 116 hours and Turkish 107 hours. The average duration of the test utterances is $4.15$ seconds, ranging from $0.19$ seconds to $30.85$ seconds.", "The AP16-OL7 database was originally created by Speechocean Inc., targeted towards various speech processing tasks (mainly speech recognition), and was used as the official data for the AP16-OLR LID challenge. The database contains seven datasets, each in a particular language. These are: Mandarin, Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. The data volume for each language is approximately 10 hours of speech signals recorded by 24 speakers (12 males and 12 females), with each speaker recording approximately 300 utterances in reading style by mobile phones, with a sampling rate of 16kHz and a sample size of 16 bits. Each dataset was split into a training set consisting of 18 speakers, and a test set consisting of 6 speakers. For Mandarin, Cantonese, Vietnamese and Indonesian, the recording was conducted in a quiet environment. For Russian, Korean and Japanese, there are 2 recording conditions for each speaker, quiet and noisy. The average duration (including silence) of all the $12,939$ test utterances of the seven languages is $4.74$ seconds, ranging from $1.08$ seconds to $18.06$ seconds.", "The phonetic DNN is a TDNN structure, and the LID model is based on the LSTM-RNN. The raw feature used for those models consists of 23-dimensional Fbanks, with a symmetric 2-frame window for RNN and a symmetric 4-frame window for TDNN to splice neighboring frames. All the experiments were conducted with Kaldi BIBREF43 . The default configurations of the Kaldi WSJ s5 nnet3 recipe were used to train the phonetic DNN and the LID RNN. We first report experiments based on the Babel database, and then experiments with the AP16-OLR database." ], [ "As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).", "For the i-vector baseline, the UBM involves $2,048$ Gaussian components and the dimensionality of the i-vectors is 400. The static acoustic features consists of 12-dimensional MFCCs and the log energy. These static features are augmented by their first and second order derivatives, resulting in 39-dimensional feature vectors. In our experiment, we train an SVM for each language to determine the score of a test i-vector belonging to that language. 
The SVMs are trained on the i-vectors of all training segments, following the one-versus-rest strategy.", "The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems.", "The LID task can be conducted by either AG-RNN-LID or AG-RNN-MLT (using the LID output group) at the frame-level (denoted as `Fr.'), using the frame-level language posteriors they produce. To evaluate the utterance-level (denoted as `Utt.') performance, the frame-level posteriors are averaged to form the utterance-level posterior, by which the language decision can be made.", "The performance results with the three baseline systems, in terms of $C_{avg}$ and equal error rate (EER), are shown in Table 2 . The results indicate that both the LID RNN and the multi-task LID RNN are capable of language discrimination, and the multi-task RNN significantly outperforms both the LID RNN and the i-vector baseline. This indicates that the phone information is very useful for neural LID, even if simply used as an auxiliary objective in the model training, hence supporting our transfer learning perspective, as described in Section \"Phonetic neural modelling for LID\" .", "The multi-task learning approach is an interesting way to involve phonetic information in LID. However, it has the limitation of requiring the training data to be labelled in both languages and words/phones. This is very costly and not feasible in most scenarios. The phonetic neural models (the phonetically aware model and the PTN model) do not suffer from this problem." ], [ "The phonetically aware architecture uses phonetic features as auxiliary information to improve the RNN LID. We experimented with various architectures for the phonetic DNN, and found that the TDNN structure is a good choice. In this experiment, the TDNN structure is composed of 6 time-delay layers, with each followed by a p-norm layer that reduces the dimensionality of the activation from $2,048$ to 256, the same dimension as the recurrent layer of the LID LSTM-RNN. The activations of the last hidden layer in the TDNN are read out as the phonetic feature.", "Two TDNN models are trained. The AG-TDNN-MLT model is a multi-task model trained with the Assamese and Georgian data, and there are two groups of output targets, phone labels and language labels. The ASR performance (WER) of the AG-TDNN-MLT model is $66.4\\%$ and $64.2\\%$ for Assamese and Georgian respectively. The SWB-TDNN-ASR model is an ASR model trained with the Switchboard database. This database involves 317 hours of telephone speech signals in English, recorded from $4,870$ speakers. The ASR performance (WER) of SWB-TDNN-ASR is $20.8\\%$ on the Eval2000 dataset.", "Another design decision that had to be made was to choose which component in the LID RNN will receive the phonetic information. 
After a series of preliminary experiments, it was found that the $g$ function is the best receiver. With this choice and the two TDNN phonetic DNNs, we therefore build the phonetically aware LID system. The results are shown in Table 3 . Several conclusions can be obtained from the results.", "The phonetically aware system significantly outperforms the baseline RNN LID system (second row of the results in Table 2 ). This suggests that involving phonetic information with RNN LID has clear benefits.", "The phonetically aware system significantly outperforms the multi-task RNN LID (third row of the results in Table 2 ). Note that in the multi-task RNN LID, the phonetic knowledge is used as an auxiliary task to assist the LID RNN training and has shown great benefits. The advantages of the phonetically aware system demonstrated that using the phonetic knowledge to produce phonetic features seems to be a better method than using the knowledge to directly assist model training.", "The phonetic DNN trained with Assamese and Georgian data (AG-TDNN-MLT) shows better performance than the one trained with the Switchboard dataset (SWB-TDNN-ASR). This is not surprising as Assamese and Georgian are the two languages chosen to discriminate between in the experiments presented in this section, so AG-TDNN-MLT is more consistent with this LID task. Nevertheless, it is still highly interesting to observe that clear benefits can be obtained by using phonetic features produced by SWB-TDNN-ASR, which is trained with a completely irrelevant dataset, in terms of both languages and environmental conditions. This confirmed our transfer learning perspective theory (as discussed previously), and demonstrated that phonetic features are largely portable and the phonetic DNN can be trained with any data in any languages. This observation is particularly interesting for LID tasks on low-resource languages, as the phonetic DNN can be trained with data from any rich-resource languages." ], [ "In the above experiments, the phonetic feature is used as auxiliary information. Here, we evaluate the PTN architecture where the phonetic feature entirely replaces the acoustic features (Fbanks). The experiment is conducted with two phonetic DNN models: AG-TDNN-MLT and SWB-TDNN-ASR.", "The results are presented in Table 4 . We first observe that the PTN systems perform as well as the best phonetically aware system in Table 3 , and even better in terms of the utterance-level EER. For better comparison, we also test the special case of the phonetically aware RNN LID (Ph. Aware), where both the phonetic and acoustic features are used as the LID RNN input (Ph+Fb). This is the same as the PTN model, but involves additional acoustic features. The results are shown in the second group of Table 4 . It can be seen that this feature combination does not provide any notable improvement to the results. This means that the phonetic feature is sufficient to represent the distinctiveness of each language, in accordance with our argument that language characters are mostly phonetic.", "We also attempted to use the TDNN as the LID model (replacing the RNN) to learn static (rather than temporal) patterns of the phonetic features. We found that this model failed to converge. The same phenomenon was also observed in the AP16-OLR experiment (which will be discussed later in the paper). This is an important observation and it suggests that, with the phonetic feature, only the temporal properties are informative for language discrimination." 
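To make the PTN set-up above more concrete, the following is a minimal PyTorch sketch of the PTN idea only; it is not the Kaldi nnet3 recipe actually used in these experiments. A pretrained, frozen module stands in for the phonetic TDNN's last hidden layer, a single-layer LSTM consumes the phonetic features, and the frame-level language posteriors are averaged to give the utterance-level decision. The stand-in extractor, the layer sizes and the two-language output are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of a PTN-style LID model (assumed layer sizes; the real
# systems were trained with Kaldi nnet3 recipes). A frozen, ASR-trained feature
# extractor replaces the raw Fbank input of the LID LSTM-RNN.
import torch
import torch.nn as nn

class PTNLid(nn.Module):
    def __init__(self, phonetic_extractor, feat_dim=256, hidden_dim=256, n_langs=2):
        super().__init__()
        self.phonetic_extractor = phonetic_extractor       # pretrained with ASR labels
        for p in self.phonetic_extractor.parameters():     # transfer learning: keep it frozen
            p.requires_grad = False
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=1, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_langs)

    def forward(self, fbanks):                             # fbanks: (batch, frames, 23)
        with torch.no_grad():
            phonetic = self.phonetic_extractor(fbanks)     # frame-level phonetic features
        h, _ = self.lstm(phonetic)
        frame_post = torch.softmax(self.output(h), dim=-1) # frame-level language posteriors
        utt_post = frame_post.mean(dim=1)                  # averaged utterance-level posterior
        return frame_post, utt_post

# Hypothetical stand-in for the phonetic TDNN's last hidden layer.
phonetic_extractor = nn.Sequential(nn.Linear(23, 256), nn.Tanh())
model = PTNLid(phonetic_extractor)
frame_post, utt_post = model(torch.randn(4, 300, 23))
predicted_language = utt_post.argmax(dim=-1)
```

Freezing the extractor is what makes this a transfer-learning set-up: the phonetic front-end is trained once with ASR labels (possibly on a completely different language, as with SWB-TDNN-ASR) and then reused unchanged for LID.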
], [ "The good performance using only the phonetic features (i.e. the PTN approach) leads to the question of how this performance advantage in comparison to the RNN LID baseline is obtained. This paper has discussed the phonetic and transfer learning perspectives, which jointly state that the main advantage of PTN is the phonetic knowledge learned through transfer learning. However, another possible reason is that the deeper architecture consisting of both the phonetic DNN and the LID RNN may help to learn more abstract features. If the latter reason is more important, then a similar deep structure with only the LID labels can work similarly well. To answer this question, we design the following three experiments to test the contributions to the results from phonetic information (transfer learning) and deep architecture (deep learning):", "TDNN-LSTM. The phonetic DNN, TDNN in the experiment, is initialized randomly and trained together with the LID RNN. This means that the TDNN is not trained with ASR labels, but as part of the LID neural model, and is trained end-to-end.", "Pre-trained TDNN-LSTM. The same as TDNN-LSTM, except that the TDNN is initialized by AG-TDNN-MLT.", "3-layer LSTM-RNN. The 1-layer LSTM-RNN LID model may not be strong enough to learn useful information from acoustic features, hence leading to the suboptimal performance in Table 2 . We experiment with a 3-layer LSTM-RNN LID system to test if a simple deeper network can obtain the same performance as with the phonetic feature.", "The results of these three deep models are shown in Table 5 . The TDNN-LSTM model completely fails. Using the phonetic TDNN as the initialization helps the training, but the results are worse than directly using the phonetic model. This means that the phonetic feature is almost optimal, and does not require any further LID-oriented end-to-end training. Finally, involving more LSTM layers (3-layer LSTM-RNN) does improve the performance a little when compared to the one-layer LSTM baseline ( $7.70$ vs $9.20$ , ref. to Table 2 ). These results indicate that the improvement with the PTN architecture is mainly due to the phonetic information it has learned from the ASR-oriented training (sometimes by multi-task learning), rather than the deep network structure. In other words, it is the transfer learning instead of deep learning that improves LID performance with the PTN architecture." ], [ "We evaluate various LID models on the seven languages of the Babel database. First, the i-vector and LSTM-RNN LID baselines are presented. For the i-vector system, linear discriminant analysis (LDA) is employed to promote language-related information before training SVMs. The dimensionality of the LDA projection space is set to 6. For the phonetically aware RNN and the PTN systems, two phonetic DNNs are evaluated, AG-TDNN-MLT and SWB-TDNN-ASR. For the phonetically aware system, the $g$ function of the LSTM-RNN LID model is chosen as the receiver. The results are shown in Table 6 . It can be seen that both the phonetically aware and the PTN systems outperform the i-vector baseline and the acoustic RNN LID baseline, and that the PTN system with the AG-TDNN-MLT phonetic DNN performs the best. The SWB-TDNN-ASR performs slightly worse than AG-TDNN-MLT, indicating that familiarity with the language and the environment is beneficial when discriminating between languages. However, phonetic DNNs trained with data in foreign languages and in mismatched environment conditions (e.g., SWB-TDNN-ASR) still work well."
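The i-vector back-end described above (an LDA projection to 6 dimensions followed by per-language SVMs trained one-versus-rest) can be approximated with scikit-learn roughly as below. The i-vectors themselves are assumed to come from the Kaldi front-end; the linear kernel, the variable names and the toy data are assumptions of this sketch rather than details stated in the text.

```python
# Sketch of the i-vector back-end: LDA to 6 dimensions, then one SVM per
# language trained one-versus-rest on the projected i-vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def train_backend(train_ivectors, train_langs, n_lda_dims=6):
    lda = LinearDiscriminantAnalysis(n_components=n_lda_dims)
    projected = lda.fit_transform(train_ivectors, train_langs)
    svms = {}
    for lang in np.unique(train_langs):
        targets = (train_langs == lang).astype(int)          # one-versus-rest labels
        svm = SVC(kernel="linear")
        svm.fit(projected, targets)
        svms[lang] = svm
    return lda, svms

def score_backend(lda, svms, test_ivectors):
    projected = lda.transform(test_ivectors)
    # decision_function gives a per-language score for each test i-vector
    return {lang: svm.decision_function(projected) for lang, svm in svms.items()}

# toy usage: random 400-dimensional "i-vectors" for 7 languages
rng = np.random.default_rng(0)
ivectors = rng.normal(size=(700, 400))
languages = np.repeat(np.arange(7), 100)
lda, svms = train_backend(ivectors, languages)
scores = score_backend(lda, svms, rng.normal(size=(10, 400)))
```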
], [ "In this section, we test the phonetic RNN LID approach on the AP16-OLR database. Compared to the Babel database, the speech signals in AP16-OLR are broadband (sampling rate of 16 kHz), and the acoustic environment is less noisy. Additionally, the speech data of each language is much more limited (10 hours per language), so we assume that training a phonetic DNN model is not feasible with the data of the target languages. We therefore utilize transfer learning, i.e., using phonetic DNNs trained on data in other languages.", "All the test conditions are the same as in the 7-language Babel experiment. We trained two phonetic DNNs: one is a TDNN model of the same size as the AG-TDNN-MLT model in Section \"Babel: phonetically aware bilingual LID\" , but trained on the WSJ database, denoted by `WSJ-TDNN-ASR'. The other is also a TDNN, but is taken from an industry project, trained on a speech database involving $10,000$ hours of Chinese speech signals with 40-dimensional Fbanks. The network contains 7 rectifier TDNN layers, each containing $1,200$ hidden units. This model is denoted by `CH-TDNN-ASR'. The weight matrix of the last hidden layer in CH-TDNN-ASR is decomposed by SVD, where the low rank is set to 400. The 400-dimensional activations are read from the low-rank layer and are used as the phonetic feature.", "The test results on the seven languages in the database are shown in Table 7 . It can be seen that the phonetic RNN LID models, either the phonetically aware RNN or the PTN approach, significantly outperform the acoustic RNN baseline system. The PTN system seems much more effective, which differs from the Babel database results. This may be attributed to the limited training data, so the simpler PTN architecture is preferred. Comparing the WSJ-based phonetic DNN and the Chinese phonetic DNN, the Chinese model is better. This may be attributed to several reasons: (1) the Chinese database contains a larger volume of training data; (2) Chinese is one of the seven languages in AP16-OLR; (3) Chinese is more similar to the remaining 6 target languages in comparison to English, as most of the languages in AP16-OLR are oriental languages.", "Another observation is that the i-vector system outperforms the phonetic RNN systems in the AP16-OLR experiment, which is inconsistent with the observations in the Babel experiment, where both phonetic systems significantly outperform the i-vector system. This discrepancy can be attributed to the different data profiles of the two databases, with two possible key factors: (1) the utterances of AP16-OLR are longer than those of Babel, making the i-vector system more effective; (2) the speech signals of AP16-OLR are cleaner than those of Babel. The RNN system is more robust against noise, and this advantage is less prominent with clean data. We will examine the two conjectures in the following experiments." ], [ "To show the relative advantage of the RNN and the i-vector systems on utterances of different lengths, we select the utterances of at least 5 seconds from the AP16-OLR test set, and create 10 test sets by dividing them into small utterances of different durations, from $0.5$ seconds to 5 seconds, in steps of $0.5$ seconds. Each group contains $5,907$ utterances, and each utterance in a group is a random segment excerpted from the original utterance.", "The performance of the i-vector and PTN systems on the 10 test sets is shown in Fig. 4 , in terms of $C_{avg}$ and EER respectively. 
It is clear that the PTN system is more effective on short utterances, whereas if the utterance duration is more than 3 seconds, the i-vector system is the best performer, especially in terms of EER.", "The duration distributions of the test utterances of the Babel database and the AP16-OLR database are shown in Fig. 5 . It is clear that the test utterances are generally longer in AP16-OLR than in Babel. This explains why the relative performance of the i-vector system and the RNN system is inconsistent between the two databases." ], [ "Finally, we test the hypothesis that the RNN system is more robust against noise. Firstly, white noise is added to the AP16-OLR test set at different SNR levels, and the noise-augmented data are tested on two systems: the i-vector baseline and the best performing PTN system from Table 7 , i.e. with CH-TDNN-ASR as the phonetic DNN. The results of these two systems with different levels of white noise are shown in Table 8 . It can be seen that the PTN system is more noise-robust: with more noise corruption, the gap between the i-vector system and the PTN system becomes less significant, and the PTN system is better than the i-vector system in terms of $C_{avg}$ when the noise level is high (SNR=10). This can be observed more clearly in Fig. 6 , where the performance degradation rates compared to the noise-free condition are shown. The figure shows that when the noise increases, the performance degradation with the PTN system is less significant compared to the degradation with the i-vector system. As the Babel speech data is much noisier than the AP16-OLR speech, this noise robustness with the PTN approach partly explains why the relative performance is inconsistent between the two databases." ], [ "This paper proposed a phonetic temporal neural (PTN) approach for language identification. In this approach, phonetic features are substituted for acoustic features when building the RNN LID model. Our experiments conducted on the Babel and AP16-OLR databases demonstrated that the PTN approach can provide dramatic performance improvement over the baseline RNN LID system, with even better results than a phonetically aware approach that treats the phonetic feature as additional auxiliary information. This demonstrated that phonetic temporal information is much more informative than raw acoustic information for discriminating between languages. This was a long-standing belief of LID researchers in the PRLM era, but has been doubted since the increased popularity and utilization of the i-vector approach in recent years. Future work will improve the performance of the neural LID approach on long sentences, by enabling the LSTM-RNN to learn long-time patterns, e.g., by multi-scale RNNs BIBREF44 ." ] ], "section_name": [ "Introduction", "Cues for language identification", "LID approaches", "Motivation of the paper", "Paper organization", "Phonetic neural modelling for LID", "Phonetically aware acoustic neural model", "Phonetic temporal neural model", "Understanding the PTN approach", "Model structure", "LSTM-RNN LID", "Phonetically aware neural LID", "Phonetic temporal neural (PTN) LID", "Databases and configurations", "Babel: baseline of bilingual LID", "Babel: phonetically aware bilingual LID", "Babel: PTN for bilingual LID", "Babel: Phonetic knowledge or deep structure?", "Babel: PTN on seven languages", "AP16-OLR: PTN on seven languages ", "AP16-OLR: utterance duration effect", "AP16-OLR: noise robustness", "Conclusions" ] }
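The noise-robustness test above corrupts the AP16-OLR test utterances with additive white noise at several SNR levels. The paper does not say which tool was used to generate the corrupted data, so the following is only one plausible way to produce it.

```python
# One way to add white Gaussian noise to a waveform at a target SNR (dB).
# This is an illustrative helper, not the exact procedure used in the paper.
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Return `signal` corrupted by white Gaussian noise at the given SNR in dB."""
    rng = rng or np.random.default_rng()
    signal = signal.astype(np.float64)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# e.g. corrupt a 3-second, 16 kHz utterance at SNR = 10 dB
clean = np.random.default_rng(1).normal(size=16000 * 3)    # placeholder waveform
noisy_10db = add_white_noise(clean, snr_db=10.0)
```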
{ "answers": [ { "annotation_id": [ "ff81191e38ade51b8a5470f1d5d6b6c495aa0a98" ], "answer": [ { "evidence": [ "As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).", "The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems." ], "extractive_spans": [], "free_form_answer": "The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system. ", "highlighted_evidence": [ "As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).", "The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "annotation_id": [ "da0ccad7db567240cbde50e6f151ad33ff676979" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "annotation_id": [ "34324af8eb0048a0305387b12bbaaf73a7485779" ], "answer": [ { "evidence": [ "All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. 
This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach." ], "extractive_spans": [], "free_form_answer": "Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance", "highlighted_evidence": [ "The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "five", "five", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Which is the baseline model?", "How big is the Babel database?", "What is the main contribution of the paper? " ], "question_id": [ "7b89515d731d04dd5cbfe9c2ace2eb905c119cbc", "1db37e98768f09633dfbc78616992c9575f6dba4", "79a28839fee776d2fed01e4ac39f6fedd6c6a143" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "language model", "language model", "" ], "topic_background": [ "familiar", "familiar", "unfamiliar" ] }
{ "caption": [ "TABLE I: LID methods with deep learning involvement.", "Fig. 1: LID models employing phonetic information: (a) the phonetically aware model; (b) the PTN model. Both models consist of a phonetic DNN (left) to produce phonetic features and an LID RNN (right) to make LID decisions.", "Fig. 2: The LSTM model for the study. The picture is reproduced from [41].", "Fig. 3: The phonetically aware RNN LID system (top) and the PTN LID system (bottom). The phonetic feature is read from the last hidden layer of the phonetic DNN which is a TDNN. The phonetic feature is then propagated to the g function for the phonetically aware RNN LID system, and is the only input for the PTN LID system.", "TABLE II: Results of the baseline LID systems for Babel AG.", "TABLE III: Results of phonetically aware RNN LID for Babel AG.", "TABLE IV: Results of PTN LID and phonetically aware RNN LID with both phonetic and acoustic features for Babel AG.", "TABLE V: Results of deeper LID models for Babel AG.", "TABLE VI: Results of various LID systems on the 7 languages in Babel.", "TABLE VII: Results of various LID systems on the 7 languages in AP16-OLR.", "Fig. 4: Comparison of the effect of utterance duration on ivector LID and PTN LID in terms of utterance Cavg (above) and EER (bottom).", "Fig. 5: Duration distribution of the test utterances of the Babel database and the AP16-OLR database.", "Fig. 6: Performance degradation rate of i-vector LID and PTN LID with noise in terms of utterance Cavg (left) and EER (right).", "TABLE VIII: Results of i-vector LID and PTN LID with different levels of noise." ], "file": [ "3-TableI-1.png", "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "6-TableII-1.png", "7-TableIII-1.png", "7-TableIV-1.png", "8-TableV-1.png", "9-TableVI-1.png", "9-TableVII-1.png", "9-Figure4-1.png", "10-Figure5-1.png", "10-Figure6-1.png", "10-TableVIII-1.png" ] }
[ "Which is the baseline model?", "What is the main contribution of the paper? " ]
[ [ "1705.03151-Babel: baseline of bilingual LID-2", "1705.03151-Babel: baseline of bilingual LID-0" ], [ "1705.03151-Motivation of the paper-0" ] ]
[ "The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system. ", "Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance" ]
447
1810.13024
Bi-Directional Lattice Recurrent Neural Networks for Confidence Estimation
The standard approach to mitigate errors made by an automatic speech recognition system is to use confidence scores associated with each predicted word. In the simplest case, these scores are word posterior probabilities whilst more complex schemes utilise bi-directional recurrent neural network (BiRNN) models. A number of upstream and downstream applications, however, rely on confidence scores assigned not only to 1-best hypotheses but to all words found in confusion networks or lattices. These include but are not limited to speaker adaptation, semi-supervised training and information retrieval. Although word posteriors could be used in those applications as confidence scores, they are known to have reliability issues. To make improved confidence scores more generally available, this paper shows how BiRNNs can be extended from 1-best sequences to confusion network and lattice structures. Experiments are conducted using one of the Cambridge University submissions to the IARPA OpenKWS 2016 competition. The results show that confusion network and lattice-based BiRNNs can provide a significant improvement in confidence estimation.
{ "paragraphs": [ [ " Recent years have seen an increased usage of spoken language technology in applications ranging from speech transcription BIBREF0 to personal assistants BIBREF1 . The quality of these applications heavily depends on the accuracy of the underlying automatic speech recognition (ASR) system yielding 1-best hypotheses and how well ASR errors are mitigated. The standard approach to ASR error mitigation is confidence scores BIBREF2 , BIBREF3 . A low confidence can give a signal to downstream applications about the high uncertainty of the ASR in its prediction and measures can be taken to mitigate the risk of making a wrong decision. However, confidence scores can also be used in upstream applications such as speaker adaptation BIBREF4 and semi-supervised training BIBREF5 , BIBREF6 to reflect uncertainty among multiple possible alternative hypotheses. Downstream applications, such as machine translation and information retrieval, could similarly benefit from using multiple hypotheses.", "A range of confidence scores has been proposed in the literature BIBREF3 . In the simplest case, confidence scores are posterior probabilities that can be derived using approaches such as confusion networks BIBREF7 , BIBREF8 . These posteriors typically significantly over-estimate confidence BIBREF8 . Therefore, a number of approaches have been proposed to rectify this problem. These range from simple piece-wise linear mappings given by decision trees BIBREF8 to more complex sequence models such as conditional random fields BIBREF9 , and to neural networks BIBREF10 , BIBREF11 , BIBREF12 . Though improvements over posterior probabilities on 1-best hypotheses were reported, the impact of these approaches on all hypotheses available within confusion networks and lattices has not been investigated.", "Extending confidence estimation to confusion network and lattice structures can be straightforward for some approaches, such as decision trees, and challenging for others, such as recurrent forms of neural networks. The previous work on encoding graph structures into neural networks BIBREF13 has mostly focused on embedding lattices into a fixed dimensional vector representation BIBREF14 , BIBREF15 . This paper examines a particular example of extending a bi-directional recurrent neural network (BiRNN) BIBREF16 to confusion network and lattice structures. This requires specifying how BiRNN states are propagated in the forward and backward directions, how to merge a variable number of BiRNN states, and how target confidence values are assigned to confusion network and lattice arcs. The paper shows that the state propagation in the forward and backward directions has close links to the standard forward-backward algorithm BIBREF17 . This paper proposes several approaches for merging BiRNN states, including an attention mechanism BIBREF18 . Finally, it describes a Levenshtein algorithm for assigning targets to confusion networks and an approximate solution for lattices. Combined these make it possible to assign confidence scores to every word hypothesised by the ASR, not just from a single extracted hypothesis.", "The rest of this paper is organised as follows. Section \"Bi-Directional Recurrent Neural Network\" describes the use of bi-directional recurrent neural networks for confidence estimation in 1-best hypotheses. Section \"Confusion Network and Lattice Extensions\" describes the extension to confusion network and lattice structures. Experimental results are presented in Section \"Experiments\" . 
The conclusions drawn from this work are given in Section \"Conclusions\" ." ], [ " Fig. 1 shows the simplest form of the BiRNN BIBREF16 . Unlike its uni-directional version, the BiRNN makes use of two recurrent states, one going in the forward direction in time $\\overrightarrow{\\mathbf {h}}_{t}$ and another in the backward direction $\\overleftarrow{\\mathbf {h}}_{t}$ to model past (history) and future information respectively.", "The past information can be modelled by ", "$$\\overrightarrow{\\mathbf {h}}_{t} = \\sigma (\\mathbf { W}^{(\\overrightarrow{{h}})}\\overrightarrow{\\mathbf {h}}_{t-1} + \\mathbf { W}^{(x)}\\mathbf {x}_{t})$$ (Eq. 4) ", "where $\\mathbf {x}_{t}$ is an input feature vector at time $t$ , $\\mathbf {W}^{(x)}$ is an input matrix, $\\mathbf {W}^{(\\overrightarrow{{h}})}$ is a history matrix and $\\sigma $ is an element-wise non-linearity such as a sigmoid. The future information is typically modelled in the same way. At any time $t$ the confidence $c_t$ can be estimated by ", "$$c_{t} = \\sigma (\\mathbf {w}^{(c)^{\\sf T}}{\\bf h}_{t} + {b}^{(c)})$$ (Eq. 5) ", "where $\\mathbf {w}^{c}$ and $b^{(b)}$ are a parameter vector and a bias, $\\sigma $ is any non-linearity that maps confidence score into the range $[0,1]$ and $\\mathbf {h}_{t}$ is a context vector that combines the past and future information. ", "$$\\mathbf {h}_{t} = \\begin{bmatrix}\\overrightarrow{\\bf h}_{t} & \\overleftarrow{\\bf h}_{t}\\end{bmatrix}^{\\sf T}$$ (Eq. 6) ", "The input features $\\mathbf {x}_{t}$ play a fundamental role in the model's ability to assign accurate confidence scores. Numerous hand-crafted features have been proposed BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . In the simplest case, duration and word posterior probability can be used as input features. More complex features may include embeddings BIBREF23 , acoustic and language model scores and other information. The BiRNN can be trained by minimising the binary cross-entropy ", "$$H(\\mathbf {c},\\mathbf {c}^{*};\\mathbf {\\theta }) = -\\dfrac{1}{T}\\sum _{t=1}^{T} \\Big \\lbrace {c}_{t}^{*} \\log (c_{t}) + (1 - {c}_{t}^{*}) \\log (1 - c_{t})\\Big \\rbrace $$ (Eq. 7) ", "where $c_{t}$ is a predicted confidence score for time slot $t$ and $c_{t}^{*}$ is the associated reference value. The reference values can be obtained by aligning the 1-best ASR output and reference text using the Levenshtein algorithm. Note that deletion errors cannot be handled under this framework and need to be treated separately BIBREF22 , BIBREF12 . This form of BiRNN has been examined for confidence estimation in BIBREF11 , BIBREF12 ", "The perfect confidence estimator would assign scores of one and zero to correctly and incorrectly hypothesised words respectively. In order to measure the accuracy of confidence predictions, a range of metrics have been proposed. Among these, normalised cross-entropy (NCE) is the most frequently used BIBREF24 . NCE measures the relative change in the binary cross-entropy when the empirical estimate of ASR correctness, $P_c$ , is replaced by predicted confidences $\\mathbf {c}={c_1,\\ldots ,c_T}$ . Using the definition of binary cross-entropy in Eqn. 7 , NCE can be expressed as ", "$$\\text{NCE}(\\mathbf {c},\\mathbf {c^*}) =\n\\dfrac{H(P_{c}\\cdot \\textbf {1},\\mathbf {c^*}) - H(\\mathbf {c},\\mathbf {c^*})}{H(P_{c}\\cdot \\textbf {1},\\mathbf {c^*})}$$ (Eq. 
8) ", "where $\\mathbf {1}$ is a length $T$ vector of ones, and the empirical estimate of ASR correctness is given by ", "$$P_{c} = \\dfrac{1}{T}\\sum _{t=1}^{T} {c}_{t}^{*}$$ (Eq. 9) ", "When hypothesised confidence scores $\\mathbf {c}$ are systematically better than the estimate of ASR correctness $P_c$ , NCE is positive. In the limit of perfect confidence scores, NCE approaches one.", "NCE alone is not always the most optimal metric for evaluating confidence estimators. This is because the theoretical limit of correct words being assigned a score of one and incorrect words a score of zero is not necessary for perfect operation of an upstream or downstream application. Often it is sufficient that the rank ordering of the predictions is such that all incorrect words fall below a certain threshold, and all correct words above. This is the case, for instance, in various information retrieval tasks BIBREF25 , BIBREF26 . A more suitable metric in such cases could be an area under a curve (AUC)-type metric. For balanced data the chosen curve is often the receiver operation characteristics (ROC). Whereas for imbalanced data, as is the case in this work, the precision-recall (PR) curve is normally used BIBREF27 . The PR curve is obtained by plotting precision versus recall ", "$$\\text{Precision}(\\theta ) = \\dfrac{\\text{TP}(\\theta )}{\\text{TP}(\\theta )+\\text{FP}(\\theta )},\\;\n\\text{Recall}(\\theta ) = \\dfrac{\\text{TP}(\\theta )}{\\text{TP}(\\theta ) + \\text{FN}(\\theta )}$$ (Eq. 10) ", "for a range of thresholds $\\theta $ , where TP are true positives, FP and FN are false positives and negatives. When evaluating performance on lattices and confusion networks, these metrics are computed across all arcs in the network." ], [ " A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. This section describes an extension of BiRNNs to CNs and lattices.", "Fig. 2 shows that compared to 1-best sequences in Fig. 2 , each node in a CN may have multiple incoming arcs. Thus, a decision needs to be made on how to optimally propagate information to the outgoing arcs. Furthermore, any such approach would need to handle a variable number of incoming arcs. One popular approach BIBREF15 , BIBREF14 is to use a weighted combination ", "$$\\overrightarrow{\\mathbf {h}}_{t} = \\sum _{i} \\alpha _{t}^{(i)} \\overrightarrow{\\mathbf {h}}_{t}^{(i)}$$ (Eq. 14) ", "where $\\overrightarrow{\\mathbf {h}}_{t}^{(i)}$ represents the history information associated with the $i^{\\text{th}}$ arc of the $t^{\\text{th}}$ CN bin and $\\alpha _{t}^{(i)}$ is the associated weight. A number of approaches can be used to set these weights. One simple approach is to set weights of all arcs other than the one with the highest posterior to zero. This yields a model that for 1-best hypotheses has no advantage over BiRNNs in Section \"Bi-Directional Recurrent Neural Network\" . Other simple approaches include average or normalised confidence score $\\alpha _t^{(i)} = c_t^{(i)}/\\sum _{j} c_t^{(j)}$ where $c_{t}^{(i)}$ is a word posterior probability, possibly mapped by decision trees. 
A more complex approach is an attention mechanism ", "$$\\alpha _{t}^{(i)} = \\dfrac{\\exp (z_{t}^{(i)})}{\\sum _{j} \\exp (z_{t}^{(j)})}, \\;\\text{where } z_{t}^{(i)} = \\sigma \\left({\\mathbf {w}^{(a)}}^{\\sf {T}}\\overrightarrow{\\mathbf {k}}_{t}^{(i)} + b^{(a)}\\right)$$ (Eq. 15) ", "where $\\mathbf {w}^{(a)}$ and $b^{(a)}$ are attention parameters, $\\overrightarrow{\\mathbf {k}}_{t}^{(i)}$ is a key. The choice of the key is important as it helps the attention mechanism decide which information should be propagated. It is not obvious a priori what the key should contain. One option is to include arc history information as well as some basic confidence score statistics ", "$$\\overrightarrow{\\mathbf {k}}_{t}^{(i)} = \\begin{bmatrix}\n\\overrightarrow{\\mathbf {h}}_{t}^{(i)^{\\sf T}} & c_{t}^{(i)} & \\mu _{t} & \\sigma _{t} \\end{bmatrix}^{\\sf T}$$ (Eq. 16) ", "where $\\mu _t$ and $\\sigma _t$ are the mean and standard deviation computed over $c_t^{(i)}$ at time $t$ . At the next $(t+1)^{\\text{th}}$ CN bin the forward information associated with the $i^{\\text{th}}$ arc is updated by ", "$$\\overrightarrow{\\mathbf {h}}_{t+1}^{(i)} = \\sigma (\\mathbf { W}^{(\\overrightarrow{{h}})}\\overrightarrow{\\mathbf {h}}_{t} + \\mathbf { W}^{(x)}\\mathbf {x}_{t+1}^{(i)})$$ (Eq. 17) ", "The confidence score for each CN arc is computed by ", "$$c_{t}^{(i)} = \\sigma (\\mathbf {w}^{(c)^{\\sf T}}{\\bf h}_{t}^{(i)} + {b}^{(c)})$$ (Eq. 18) ", "where ${\\bf h}_{t}^{(i)}$ is an arc context vector ", "$${\\bf h}_{t}^{(i)} = \\begin{bmatrix}\n\\overrightarrow{\\mathbf {h}}_{t}^{(i)} & \\overleftarrow{\\mathbf {h}}_{t}^{(i)}\n\\end{bmatrix}$$ (Eq. 19) ", "A summary of dependencies in this model is shown in Fig. 1 for a CN with 1 arc in the $t^{\\text{th}}$ bin and 2 arcs in the $(t+1)^{\\text{th}}$ bin.", "As illustrated in Fig. 2 , each node in a lattice marks a timestamp in an utterance and each arc represents a hypothesised word with its corresponding acoustic and language model scores. Although lattices do not normally obey a linear graph structure, if they are traversed in the topological order, no changes are required to compute confidences over lattice structures. The way the information is propagated in these graph structures is similar to the forward-backward algorithm BIBREF17 . There, the forward probability at time $t$ is ", "$$\\overrightarrow{h}_{t+1}^{(i)} = \\overrightarrow{h}_{t} x_{t+1}^{(i)}, \\;\\text{where } \\overrightarrow{h}_{t} = \\sum _{j} \\alpha _{i,j} \\overrightarrow{h}_{t}^{(j)}$$ (Eq. 20) ", "Compared to equations Eqn. 14 and Eqn. 17 , the forward recursion employs a different way to combine features $x_{t+1}^{(i)}$ and node states $\\overrightarrow{h}_{t}$ , and maintains stationary weights, i.e. the transition probabilities $\\alpha _{i,j}$ , for combining arc states $\\overrightarrow{h}_{t}^{(j)}$ . In addition, each $\\overrightarrow{h}_{t}^{(i)}$ has a probabilistic meaning which the vector $\\overrightarrow{\\mathbf {h}}_{t}^{(i)}$ does not. Furthermore, unlike in the standard algorithm, the past information at the final node is not constrained to be equal to the future information at the initial node.", "In order to train these models, each arc of a CN or lattice needs to be assigned an appropriate reference confidence value. For aligning a reference word sequence to another sequence, the Levenshtein algorithm can be used. The ROVER method has been used to iteratively align word sequences to a pivot reference sequence to construct CNs BIBREF28 . 
This approach can be extended to confusion network combination (CNC), which allows the merging of two CNs BIBREF29 . The reduced CNC alignment scheme proposed here uses a reference one-best sequence rather than a CN as the pivot, in order to tag CN arcs against a reference sequence. A soft loss of aligning reference word $\\omega _\\tau $ with the $t^{\\text{th}}$ CN bin is used ", "$$\\ell _{t}(\\omega _{\\tau }) = 1 - P_{t}(\\omega _{\\tau })$$ (Eq. 21) ", "where $P_t(\\omega )$ is a word posterior probability distribution associated with the CN bin at time $t$ . The optimal alignment is then found by minimising the above loss.", "The extension of the Levenshtein algorithm to lattices, though possible, is computationally expensive BIBREF30 . Therefore approximate schemes are normally used BIBREF31 . Common to those schemes is the use of information about the overlap of lattice arcs and time-aligned reference words to compute the loss ", "$$o_{t,\\tau } = \\max \\bigg \\lbrace 0,\\frac{|\\min \\lbrace e_{\\tau }^{*},e_{t}\\rbrace | - |\\max \\lbrace s_{\\tau }^{*},s_{t}\\rbrace |}{|\\max \\lbrace e_{\\tau }^{*},e_{t}\\rbrace |-|\\min \\lbrace s_{\\tau }^{*},s_{t}\\rbrace |}\\bigg \\rbrace $$ (Eq. 22) ", "where $\\lbrace s_t, e_t\\rbrace $ and $\\lbrace s^{*}_{\\tau }, e^{*}_{\\tau }\\rbrace $ are start and end times of lattice arcs and time-aligned words respectively. In order to yield “hard” 0 or 1 loss a threshold can be set either on the loss or the amount of overlap." ], [ " Evaluation was conducted on IARPA Babel Georgian full language pack (FLP). The FLP contains approximately 40 hours of conversational telephone speech (CTS) for training and 10 hours for development. The lexicon was obtained using the automatic approach described in BIBREF32 . The automatic speech recognition (ASR) system combines 4 diverse acoustic models in a single recognition run BIBREF33 . The diversity is obtained through the use of different model types, a tandem and a hybrid, and features, multi-lingual bottlenecks extracted by IBM and RWTH Aachen from 28 languages. The language model is a simple $n$ -gram estimated on acoustic transcripts and web data. As a part of a larger consortium, this ASR system took part in the IARPA OpenKWS 2016 competition BIBREF34 . The development data was used to assess the accuracy of confidence estimation approaches. The data was split with a ratio of $8:1:1$ into training, validation and test sets. The ASR system was used to produce lattices. Confusion networks were obtained from lattices using consensus decoding BIBREF7 . The word error rates of the 1-best sequences are 39.9% for lattices and 38.5% for confusion networks.", "The input features for the standard bi-directional recurrent neural network (BiRNN) and CN-based (BiCNRNN) are decision tree mapped posterior, duration and a 50-dimensional fastText word embedding BIBREF35 estimated from web data. The lattice-based BiRNN (BiLatRNN) makes additional use of acoustic and language model scores. All forms of BiRNNs contain one $[\\overrightarrow{128},\\overleftarrow{128}]$ dimensional bi-directional LSTM layer and one 128 dimensional feed-forward hidden layer. The implementation uses PyTorch library and is available online. For efficient training, model parameters are updated using Hogwild! stochastic gradient descent BIBREF36 , which allows asynchronous update on multiple CPU cores in parallel.", "Table 1 shows the NCE and AUC performance of confidence estimation schemes on 1-best hypotheses extracted from CNs. 
As expected, “raw” posterior probabilities yield poor NCE results although AUC performance is high. The decision tree, as expected, improves NCE and does not affect AUC due to the monotonicity of the mapping. The BiRNN yields gains over the simple decision tree, which is consistent with the previous work in the area BIBREF11 , BIBREF12 .", "The next experiment examines the extension of BiRNNs to confusion networks. The BiCNRNN uses a similar model topology, merges incoming arcs using the attention mechanism described in Section \"Confusion Network and Lattice Extensions\" and uses the Levenshtein algorithm with loss given by Eqn. 21 to obtain reference confidence values. The model parameters are estimated by minimising average binary cross-entropy loss on all CN arcs. The performance is evaluated over all CN arcs. When transitioning from 1-best arcs to all CN arcs the AUC performance is expected to drop due to an increase in the Bayes risk. Table 2 shows that BiCNRNN yields gains similar to BiRNN in Table 1 .", "As mentioned in Section \"Confusion Network and Lattice Extensions\" there are alternatives to attention for merging incoming arcs. Table 3 shows that mean and normalised posterior weights may provide a competitive alternative.", "Extending BiRNNs to lattices requires making a choice of a loss function and a method of setting reference values to lattice arcs. A simple global threshold on the amount of overlap between reference time-aligned words and lattice arcs is adopted to tag arcs. This scheme yields a false negative rate of 2.2% and false positive rate of 0.9% on 1-best CN arcs and 1.4% and 0.7% on 1-best lattice arcs. Table 4 shows the impact of using approximate loss in training the BiCNRNN. The results suggest that the mismatch between training and testing criteria, i.e. approximate in training and Levenshtein in testing, could play a significant role on BiLatRNN performance. Using this approximate scheme, a BiLatRNN was trained on lattices.", "Table 5 compares BiLatRNN performance to “raw” posteriors and decision trees. As expected, lower AUC performances are observed due to higher Bayes risk in lattices compared to CNs. The “raw” posteriors offer poor confidence estimates as can be seen from the large negative NCE and low AUC. The decision tree yields significant gains in NCE and no change in AUC performance. Note that the AUC for a random classifier on this data is 0.2466. The BiLatRNN yields very large gains in both NCE and AUC performance.", "As mentioned in Section \"Introduction\" , applications such as language learning and information retrieval rely on confidence scores to give high-precision feedback BIBREF37 or high-recall retrieval BIBREF25 , BIBREF26 . Therefore, Fig. 3 shows precision-recall curves for BiRNN in Table 1 and BiLatRNN in Table 5 . Fig. 3 shows that the BiRNN yields largest gain in the region of high precision and low recall which is useful for feedback-like applications. Whereas the BiLatRNN in Fig. 3 can be seen to significantly improve precision in the high recall region, which is useful for some retrieval tasks." ], [ " Confidence scores play an important role in many applications of spoken language technology. The standard form of confidence scores are decision tree mapped word posterior probabilities. A number of approaches have been proposed to improve confidence estimation, such as bi-directional recurrent neural networks (BiRNN). 
BiRNNs, however, can predict confidences only for 1-best sequences, which limits their more general application. This paper extends BiRNNs to confusion network (CN) and lattice structures. In particular, it proposes to use an attention mechanism to combine a variable number of incoming arcs, shows how recursions are linked to the standard forward-backward algorithm and describes how to tag CN and lattice arcs with reference confidence values. Experiments were performed on a challenging limited-resource IARPA Babel Georgian pack and show that the extended forms of BiRNNs yield significant gains in confidence estimation accuracy over all arcs in CNs and lattices. Many related applications like information retrieval, speaker adaptation, keyword spotting and semi-supervised training will benefit from the improved confidence measure." ] ], "section_name": [ "Introduction", "Bi-Directional Recurrent Neural Network", "Confusion Network and Lattice Extensions", "Experiments", "Conclusions" ] }
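A small NumPy sketch may help make the arc-merging recursion of this paper concrete: for one confusion-network bin, a key is built for each incoming arc (its state plus simple posterior statistics), attention weights merge the incoming arc states into a node state, the node state is propagated along each outgoing arc, and a sigmoid output maps an arc's context vector to a confidence (Eqns. 14-19). The dimensions, the tanh recursion non-linearity and the random parameters are placeholders rather than values from the paper, and only a single forward step is shown.

```python
# Sketch of the forward recursion over one confusion-network bin with
# attention-based merging of incoming arcs (placeholder shapes and parameters).
import numpy as np

rng = np.random.default_rng(0)
H, X = 128, 52                                  # hidden size; per-arc input features

W_h = rng.normal(scale=0.1, size=(H, H))        # history matrix
W_x = rng.normal(scale=0.1, size=(H, X))        # input matrix
w_a = rng.normal(scale=0.1, size=H + 3)         # attention key weights
b_a = 0.0
w_c = rng.normal(scale=0.1, size=2 * H)         # confidence output weights
b_c = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def merge_incoming(arc_states, arc_posteriors):
    """Attention-weighted combination of incoming arc states (Eqns. 14-16)."""
    mu, sd = np.mean(arc_posteriors), np.std(arc_posteriors)
    keys = [np.concatenate([h, [p, mu, sd]]) for h, p in zip(arc_states, arc_posteriors)]
    z = np.array([sigmoid(w_a @ k + b_a) for k in keys])
    alpha = np.exp(z) / np.exp(z).sum()
    return sum(a * h for a, h in zip(alpha, arc_states))

def propagate(node_state, arc_features):
    """Propagate the merged node state along one outgoing arc (Eqn. 17)."""
    return np.tanh(W_h @ node_state + W_x @ arc_features)

def confidence(forward_state, backward_state):
    """Per-arc confidence from the concatenated context vector (Eqns. 18-19)."""
    context = np.concatenate([forward_state, backward_state])
    return sigmoid(w_c @ context + b_c)

# toy bin with two incoming arcs, followed by one outgoing arc
incoming_states = [rng.normal(size=H), rng.normal(size=H)]
node_state = merge_incoming(incoming_states, arc_posteriors=np.array([0.7, 0.3]))
arc_state = propagate(node_state, rng.normal(size=X))
arc_confidence = confidence(arc_state, rng.normal(size=H))   # backward state is a placeholder
```

In the full model the same recursion runs in the backward direction, and the training targets come from the Levenshtein (CN) or overlap-based (lattice) arc tagging described above.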
{ "answers": [ { "annotation_id": [ "34b7458c20d11f1434b445401fc2b8d83829c213" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Fig. 2: Standard ASR outputs", "A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. This section describes an extension of BiRNNs to CNs and lattices." ], "extractive_spans": [], "free_form_answer": "graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences", "highlighted_evidence": [ "FLOAT SELECTED: Fig. 2: Standard ASR outputs", "A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. ", "Fig. 2b shows that compared to 1-best sequences in Fig. 2a, each node in a CN may have multiple incoming arcs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "4857c606a55a83454e8d81ffe17e05cf8bc4b75f" ] } ], "nlp_background": [ "two" ], "paper_read": [ "no" ], "question": [ "What is a confusion network or lattice?" ], "question_id": [ "521a7042b6308e721a7c8046be5084bc5e8ca246" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "" ], "topic_background": [ "unfamiliar" ] }
{ "caption": [ "Fig. 1: Bi-directional neural networks for confidence estimation", "Fig. 2: Standard ASR outputs", "Table 1: Confidence estimation performance on 1-best CN arcs", "Table 5: Confidence estimation performance on all lattice arcs", "Table 2: Confidence estimation performance on all CN arcs", "Table 3: Comparison of BiCNRNN arc merging mechanisms", "Fig. 3: Precision-recall curves for Table 1 and Table 5", "Table 4: Comparison of BiCNRNN arc tagging schemes" ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "4-Table1-1.png", "4-Table5-1.png", "4-Table2-1.png", "4-Table3-1.png", "4-Figure3-1.png", "4-Table4-1.png" ] }
[ "What is a confusion network or lattice?" ]
[ [ "1810.13024-2-Figure2-1.png" ] ]
[ "graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences" ]
450
1910.08987
Representation Learning for Discovering Phonemic Tone Contours
Tone is a prosodic feature used to distinguish words in many languages, some of which are endangered and scarcely documented. In this work, we use unsupervised representation learning to identify probable clusters of syllables that share the same phonemic tone. Our method extracts the pitch for each syllable, then trains a convolutional autoencoder to learn a low dimensional representation for each contour. We then apply the mean shift algorithm to cluster tones in high-density regions of the latent space. Furthermore, by feeding the centers of each cluster into the decoder, we produce a prototypical contour that represents each cluster. We apply this method to spoken multi-syllable words in Mandarin Chinese and Cantonese and evaluate how closely our clusters match the ground truth tone categories. Finally, we discuss some difficulties with our approach, including contextual tone variation and allophony effects.
{ "paragraphs": [ [ "Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks.", "One of the first tasks during the description of an unfamiliar language is determining its phonemic inventory: what are the consonants, vowels, and tones of the language, and which pairs of phonemes are contrastive? Tone presents a unique challenge because unlike consonants and vowels, which can be identified in isolation, tones do not have a fixed pitch, and vary by speaker and situation. Since tone data is subject to interpretation, different linguists may produce different descriptions of the tone system of the same language BIBREF1.", "In this work, we present a model to automatically infer phonemic tone categories of a tonal language. We use an unsupervised representation learning and clustering approach, which requires only a set of spoken words in the target language, and produces clusters of syllables that probably have the same tone. We apply our method on Mandarin Chinese and Cantonese datasets, for which the ground truth annotation is used for evaluation. Our method does not make any language-specific assumptions, so it may be applied to low-resource languages whose phonemic inventories are not already established." ], [ "Mandarin Chinese (1.1 billion speakers) and Cantonese (74 million speakers) are two tonal languages in the Sinitic family BIBREF0. Mandarin has four lexical tones: high (55), rising (25), low-dipping (214), and falling (51). The third tone sometimes undergoes sandhi, addressed in section SECREF3. We exclude a fifth, neutral tone, which can only occur in word-final positions and has no fixed pitch.", "Cantonese has six lexical tones: high-level (55), mid-rising (25), mid-level (33), low-falling (21), low-rising (23), and low-level (22). Some descriptions of Cantonese include nine tones, of which three are checked tones that are flat, shorter in duration, and only occur on syllables ending in /p/, /t/, or /k/. Since each one of the checked tones are in complementary distribution with an unchecked tone, we adopt the simpler six tone model that treats the checked tones as variants of the high, mid, and low level tones. Contours for the lexical tones in both languages are shown in Figure FIGREF2." ], [ "Many low-resource languages lack sufficient transcribed data for supervised speech processing, thus unsupervised models for speech processing is an emerging area of research. The Zerospeech 2015 and 2017 challenges featured unsupervised learning of contrasting phonemes in English and Xitsonga, evaluated by an ABX phoneme discrimination task BIBREF3. One successful approach used denoising and correspondence autoencoders to learn a representation that avoided capturing noise and irrelevant inter-speaker variation BIBREF4. 
Deep LSTMs for segmenting and clustering phonemes in speech have also been explored in BIBREF5 and BIBREF6.", "In Mandarin Chinese, deep neural networks have been successful for tone classification in isolated syllables BIBREF7 as well as in continuous speech BIBREF8, BIBREF9. Both of these models found that Mel-frequency cepstral coefficients (MFCCs) outperformed pitch contour features, despite the fact that MFCC features do not contain pitch information. In Cantonese, support vector machines (SVMs) have been applied to classify tones in continuous speech, using pitch contours as input BIBREF10.", "Unsupervised learning of tones remains largely unexplored. Levow BIBREF11 performed unsupervised and semi-supervised tone clustering in Mandarin, using average pitch and slope as features, and $k$-means and asymmetric $k$-lines for clustering. Graph-based community detection techniques have been applied to group $n$-grams of contiguous contours into clusters in Mandarin BIBREF12. Our work appears to be the first model to use unsupervised deep neural networks for phonemic tone clustering." ], [ "We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables.", "We extract ground-truth tones for evaluation purposes. In Mandarin, the tones are extracted from the pinyin transcription; in Cantonese, we reference the character entries on Wiktionary to retrieve the romanized pronunciation and tones. For Mandarin, we correct for third-tone sandhi (a phonological rule where a pair of consecutive third-tones is always realized as a second-tone followed by a third-tone). We also exclude the neutral tone, which has no fixed pitch and is sometimes thought of as a lack of tone." ], [ "We use Praat's autocorrelation-based pitch estimation algorithm to extract the fundamental frequency (F0) contour for each sample, using a minimum frequency of 75Hz and a maximum frequency of 500Hz BIBREF13. The interface between Python and Praat is handled using Parselmouth BIBREF14. We normalize the contour to be between 0 and 1, based on the speaker's pitch range.", "Next, we segment each speech sample into syllables, which is necessary because syllable boundaries are not provided in our datasets. This is done using a simple heuristic that detects continuously voiced segments, and manual annotation where the heuristic fails. To obtain a constant length pitch contour as input to our model, we sample the pitch at 40 equally spaced points. Note that by sampling a variable length contour to a constant length, information about syllable length is lost; this is acceptable because we consider tones which differ on length as variations of the same tone." ], [ "We use a convolutional autoencoder (Figure FIGREF4) to learn a two-dimensional latent vector for each syllable. Convolutional layers are widely used in computer vision and speech processing to learn spatially local features that are invariant of position. 
We use a low-dimensional latent space so that the model learns to generate a representation that only captures the most important aspects of the input contour, and also because clustering algorithms tend to perform poorly in high-dimensional spaces.", "Our encoder consists of three layers. The first layer applies 2 convolutional filters (kernel size 4, stride 1) followed by max pooling (kernel size 2) and a tanh activation. The second layer applies 4 convolutional filters (kernel size 4, stride 1), again with max pooling (kernel size 2) and a tanh activation. The third layer is a fully connected layer with two-dimensional output. Our decoder is the encoder in reverse, consisting of one fully connected layer and two deconvolution layers, with the same layer shapes as the encoder.", "We train the autoencoder using PyTorch BIBREF15, for 500 epochs, with a batch size of 60. The model is optimized using Adam BIBREF16 with a learning rate of 5e-4 to minimize the mean squared error between the input and output contours." ], [ "We run the encoder on each syllable's pitch contour to get its latent representation; we apply principal component analysis (PCA) to remove any correlation between the two dimensions. Then, we run mean shift clustering BIBREF17, BIBREF18, estimating a probability density function in the latent space. The procedure performs gradient ascent on all the points until they converge to a set of stationary points, which are local maxima of the density function. These stationary points are taken to be cluster centers, and points that converge to the same stationary point belong to the same cluster.", "Unlike $k$-means clustering, the mean shift procedure does not require the number of clusters to be specified, only a bandwidth parameter (set to 0.6 for our experiments). The cluster centers are always in regions of high density, so they can be viewed as prototypes that represent their respective clusters. Another advantage is that unlike $k$-means, mean shift clustering is robust to outliers.", "Although the mean shift procedure technically assigns every point to a cluster, not all such clusters are linguistically plausible as phonemic tones, because they contain very few points. Thus, we take only clusters larger than a threshold, determined empirically from the distribution of cluster sizes; the rest are considered spurious clusters and we treat them as unclustered. Finally, we feed the remaining cluster centers into the decoder to generate a prototype pitch contour for each cluster.", "Figure FIGREF9 shows the latent space learned by the autoencoders and the clustering output. Our model found 4 tone clusters in Mandarin, matching the number of phonemic tones (Table TABREF12) and 5 in Cantonese, which is one fewer than the number of phonemic tones (Table TABREF13). In Mandarin, the 4 clusters correspond very well with the 4 phonemic tone categories, and the generated contours closely match the ground truth in Figure FIGREF2. There is some overlap between tones 3 and 4; this is because tone 3 is sometimes realized as a low-falling tone without the final rise, a process known as half T3 sandhi BIBREF19; thus, it may overlap with tone 4 (falling tone).", "In Cantonese, the 5 clusters A-E correspond to low-falling, mid-level, high-level, mid-rising, and low-rising tones. Tone clustering in Cantonese is expected to be more difficult than in Mandarin because it has 6 contrastive tones rather than 4. 
The model is more effective at clustering the higher tones (1, 2, 3), and less effective at clustering the lower tones (4, 5, 6), particularly tone 4 (low-falling) and tone 6 (low-level). This confirms the difficulties in prior work, which reported worse classification accuracy on the lower-pitched tones because the lower region of the Cantonese tone space is more crowded than the upper region BIBREF10.", "Two other sources of error are carry-over and declination effects. A carry-over effect is when the pitch contour of a tone undergoes contextual variation depending on the preceding tone; strong carry-over effects have been observed in Mandarin BIBREF20. Prior work BIBREF11 avoided carry-over effects by using only the second half of every syllable, but we do not consider language-specific heuristics in our model. Declination is a phenomenon in which the pitch declines over an utterance BIBREF1, BIBREF10. This is especially a problem in Cantonese, which has tones that differ only on pitch level and not contour: for example, a mid-level tone near the end of a phrase may have the same absolute pitch as a low-level tone at the start of a phrase.", "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables." ], [ "We propose a model for unsupervised clustering and discovery of phonemic tones in tonal languages, using spoken words as input. Our model extracts the F0 pitch contour, trains a convolutional autoencoder to learn a low-dimensional representation for each contour, and applies mean shift clustering to the resulting latent space. We obtain promising results with both Mandarin Chinese and Cantonese, using only 400 spoken words from each language. Cantonese presents more difficulties because of its larger number of tones, especially at the lower half of the pitch range, and also due to multiple contrastive level tones. Finally, we briefly explore the influence of contextual variation on our model.", "A limitation of this study is that our model only considers pitch, which is only one aspect of tone. In reality, pitch is determined not only by tone, but by a complex mixture of intonation, stress, and other prosody effects. Tone is not a purely phonetic property – it is impossible to determine on a phonetic basis whether two pitch contours have distinct underlying tones, or are variants of the same underlying tone (perhaps in complementary distribution). Instead, two phonemic tones can be shown to be contrastive only by providing a minimal pair, where two semantically different lexical items are identical in every respect other than their tones. The last problem is not unique to tone: similar difficulties have been noted when attempting to identify consonant and vowel phonemes automatically BIBREF21. In future work, we plan to further explore these issues and develop more nuanced models to learn tone from speech." ], [ "We thank Prof Gerald Penn for his help suggestions during this project. Rudzicz is a CIFAR Chair in AI." 
] ], "section_name": [ "Introduction", "Introduction ::: Tone in Mandarin and Cantonese", "Related Work", "Data and Preprocessing", "Data and Preprocessing ::: Pitch extraction and syllable segmentation", "Model ::: Convolutional autoencoder", "Model ::: Mean shift clustering", "Results", "Conclusions and future work", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "ab5cab754878813810a65c4344fb95cac96ea3a3" ], "answer": [ { "evidence": [ "We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables." ], "extractive_spans": [ "Mandarin dataset", "Cantonese dataset" ], "free_form_answer": "", "highlighted_evidence": [ "We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3525ebb096d0bcc39372bed06d8d0eed6e71743e" ], "answer": [ { "evidence": [ "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.", "FLOAT SELECTED: Table 3. Normalized mutual information (NMI) between cluster assignments and ground truth tones, considering only the first syllable of each word, or all syllables." ], "extractive_spans": [], "free_form_answer": "NMI between cluster assignments and ground truth tones for all sylables is:\nMandarin: 0.641\nCantonese: 0.464", "highlighted_evidence": [ "To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.", "FLOAT SELECTED: Table 3. Normalized mutual information (NMI) between cluster assignments and ground truth tones, considering only the first syllable of each word, or all syllables." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "What dataset is used for training?", "How close do clusters match to ground truth tone categories?" ], "question_id": [ "06776b8dfd1fe27b5376ae44436b367a71ff9912", "f1831b2e96ff8ef65b8fde8b4c2ee3e04b7ac4bf" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Fig. 1. Pitch contours for the four Mandarin tones and six Cantonese tones in isolation, produced by native speakers. Figure adapted from [3].", "Fig. 2. Diagram of our model architecture, consisting of a convolutional autoencoder to learn a latent representation for each pitch contour, and mean shift clustering to identify groups of similar tones.", "Fig. 3. Latent space generated by autoencoder and the results of mean shift clustering for Mandarin and Cantonese. Each cluster center is fed through the decoder to generate the corresponding pitch contour. The clusters within each language are ordered by size, from largest to smallest.", "Table 3. Normalized mutual information (NMI) between cluster assignments and ground truth tones, considering only the first syllable of each word, or all syllables.", "Table 1. Cluster and tone frequencies for Mandarin.", "Table 2. Cluster and tone frequencies for Cantonese." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Figure3-1.png", "4-Table3-1.png", "4-Table1-1.png", "4-Table2-1.png" ] }
[ "How close do clusters match to ground truth tone categories?" ]
[ [ "1910.08987-4-Table3-1.png", "1910.08987-Results-3" ] ]
[ "NMI between cluster assignments and ground truth tones for all sylables is:\nMandarin: 0.641\nCantonese: 0.464" ]
451
1701.09123
Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
{ "paragraphs": [ [ "A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.) and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be used to refer to 18 different entities, including the continent, the European Union, various Greek mythological entities, a rock band, some music albums, a magazine, a short story, etc. Furthermore, it is possible to refer to a named entity by means of anaphoric pronouns and co-referent expressions such as `he', `her', `their', `I', `the 35 year old', etc. Therefore, in order to provide an adequate and comprehensive account of named entities in text it is necessary to recognize the mention of a named entity and to classify it by a pre-defined type (e.g, person, location, organization). Named Entity Recognition and Classification (NERC) is usually a required step to perform Named Entity Disambiguation (NED), namely to link `Europe' to the right Wikipedia article, and to resolve every form of mentioning or co-referring to the same entity.", "Nowadays NERC systems are widely being used in research for tasks such as Coreference Resolution BIBREF0 , Named Entity Disambiguation BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 for which a lot of interest has been created by the TAC KBP shared tasks BIBREF6 , Machine Translation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , Aspect Based Sentiment Analysis BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , Event Extraction BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 and Event Ordering BIBREF20 .", "Moreover, NERC systems are integrated in the processing chain of many industrial software applications, mostly by companies offering specific solutions for a particular industrial sector which require recognizing named entities specific of their domain. There is therefore a clear interest in both academic research and industry to develop robust and efficient NERC systems: For industrial vendors it is particularly important to diversify their services by including NLP technology for a variety of languages whereas in academic research NERC is one of the foundations of many other NLP end-tasks.", "Most NERC taggers are supervised statistical systems that extract patterns and term features which are considered to be indications of Named Entity (NE) types using the manually annotated training data (extracting orthographic, linguistic and other types of evidence) and often external knowledge resources. As in other NLP tasks, supervised statistical NERC systems are more robust and obtain better performance on available evaluation sets, although sometimes the statistical models can also be combined with specific rules for some NE types. For best performance, supervised statistical approaches require manually annotated training data, which is both expensive and time-consuming. This has seriously hindered the development of robust high performing NERC systems for many languages but also for other domains and text genres BIBREF21 , BIBREF22 , in what we will henceforth call `out-of-domain' evaluations.", "Moreover, supervised NERC systems often require fine-tuning for each language and, as some of the features require language-specific knowledge, this poses yet an extra complication for the development of robust multilingual NERC systems. For example, it is well-known that in German every noun is capitalized and that compounds including named entities are pervasive. 
This also applies to agglutinative languages such as Basque, Korean, Finnish, Japanese, Hungarian or Turkish. For this type of languages, it had usually been assumed that linguistic features (typically Part of Speech (POS) and lemmas, but also semantic features based on WordNet, for example) and perhaps specific hand-crafted rules, were a necessary condition for good NERC performance as they would allow to capture better the most recurrent declensions (cases) of named entities for Basque BIBREF23 or to address problems such as sparsity and capitalization of every noun for German BIBREF24 , BIBREF25 , BIBREF26 . This language dependency was easy to see in the CoNLL 2002 and 2003 tasks, in which systems participating in the two available languages for each edition obtained in general different results for each language. This suggests that without fine-tuning for each corpus and language, the systems did not generalize well across languages BIBREF27 .", "This paper presents a multilingual and robust NERC system based on simple, general and shallow features that heavily relies on word representation features for high performance. Even though we do not use linguistic motivated features, our approach also works well for inflected languages such as Basque and German. We demonstrate the robustness of our approach by reporting best results for five languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, including seven in-domain and eight out-of-domain evaluations." ], [ "The main contributions of this paper are the following: First, we show how to easily develop robust NERC systems across datasets and languages with minimal human intervention, even for languages with declension and/or complex morphology. Second, we empirically show how to effectively use various types of simple word representation features thereby providing a clear methodology for choosing and combining them. Third, we demonstrate that our system still obtains very competitive results even when the supervised data is reduced by half (even less in some cases), alleviating the dependency on costly hand annotated data. These three main contributions are based on:", "A simple and shallow robust set of features across languages and datasets, even in out-of-domain evaluations.", "The lack of linguistic motivated features, even for languages with agglutinative (e.g., Basque) and/or complex morphology (e.g., German).", "A clear methodology for using and combining various types of word representation features by leveraging public unlabeled data.", "Our approach consists of shallow local features complemented by three types of word representation (clustering) features: Brown clusters BIBREF28 , Clark clusters BIBREF29 and K-means clusters on top of the word vectors obtained by using the Skip-gram algorithm BIBREF30 . We demonstrate that combining and stacking different clustering features induced from various data sources (Reuters, Wikipedia, Gigaword, etc.) allows to cover different and more varied types of named entities without manual feature tuning. Even though our approach is much simpler than most, we obtain the best results for Dutch, Spanish and English and comparable results in German (on CoNLL 2002 and 2003). 
We also report best results for German using the GermEval 2014 shared task data and for Basque using the Egunkaria testset BIBREF23 .", "We report out-of-domain evaluations in three languages (Dutch, English and Spanish) using four different datasets to compare our system with the best publicly available systems for those languages: Illinois NER BIBREF31 for English, Stanford NER BIBREF32 for English and Spanish, SONAR-1 NERD for Dutch BIBREF33 and Freeling for Spanish BIBREF34 . We outperform every other system in the eight out-of-domain evaluations reported in Section SECREF79 . Furthermore, the out-of-domain results show that our clustering features provide a simple and easy method to improve the robustness of NERC systems.", "Finally, and inspired by previous work BIBREF35 , BIBREF36 we measure how much supervision is required to obtain state of the art results. In Section SECREF75 we show that we can still obtain very competitive results reducing the supervised data by half (and sometimes even more). This, together with the lack of linguistic features, means that our system considerably saves data annotation costs, which is quite convenient when trying to develop a NERC system for a new language and/or domain.", "Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. Our NERC system is publicly available and distributed under the Apache 2.0 License and part of the IXA pipes tools BIBREF38 . Every result reported in this paper is obtained using the conlleval script from the CoNLL 2002 and CoNLL 2003 shared tasks. To guarantee reproducibility of results we also make publicly available the models and the scripts used to perform the evaluations. The system, models and evaluation scripts can be found in the ixa-pipe-nerc website.", "Next Section reviews related work, focusing on best performing NERC systems for each language evaluated on standard shared evaluation task data. Section SECREF3 presents the design of our system and our overall approach to NERC. In Section SECREF4 we report the evaluation results obtained by our system for 5 languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, distributed in 7 in-domain and 8 out-of-domain evaluations. Section SECREF5 discusses the results and contributions of our approach. In Section SECREF6 we highlight the main aspects of our work providing some concluding remarks and future work to be done using our NERC approach applied to other text genres, domains and sequence labeling tasks." ], [ "The Named Entity Recognition and Classification (NERC) task was first defined for the Sixth Message Understanding Conference (MUC 6) BIBREF39 . The MUC 6 tasks focused on Information Extraction (IE) from unstructured text and NERC was deemed to be an important IE sub-task with the aim of recognizing and classifying nominal mentions of persons, organizations and locations, and also numeric expressions of dates, money, percentage and time. In the following years, research on NERC increased as it was considered to be a crucial source of information for other Natural Language Processing tasks such as Question Answering (QA) and Textual Entailment (RTE) BIBREF39 . 
Furthermore, while MUC 6 was solely devoted to English as target language, the CoNLL shared tasks (2002 and 2003) boosted research on language independent NERC for 3 additional target languages: Dutch, German and Spanish BIBREF40 , BIBREF41 .", "The various MUC, ACE and CoNLL evaluations provided a very convenient framework to test and compare NERC systems, algorithms and approaches. They provided manually annotated data for training and testing the systems as well as an objective evaluation methodology. Using such framework, research rapidly evolved from rule-based approaches (consisting of manually handcrafted rules) to language independent systems focused on learning supervised statistical models. Thus, while in the MUC 6 competition 5 out of 8 systems were rule-based, in CoNLL 2003 16 teams participated in the English task all using statistical-based NERC BIBREF39 ." ], [ "Table TABREF10 describes the 12 datasets used in this paper. The first half lists the corpora used for in-domain evaluation whereas the lower half contains the out-of-domain datasets. The CoNLL NER shared tasks focused on language independent machine learning approaches for 4 entity types: person, location, organization and miscellaneous entities. The 2002 edition provided manually annotated data in Dutch and Spanish whereas in 2003 the languages were German and English. In addition to the CoNLL data, for English we also use the formal run of MUC 7 and Wikigold for out-of-domain evaluation. Very detailed descriptions of CoNLL and MUC data can easily be found in the literature, including the shared task descriptions themselves BIBREF42 , BIBREF40 , BIBREF41 , so in the following we will describe the remaining, newer datasets.", "The Wikigold corpus consists of 39K words of English Wikipedia manually annotated following the CoNLL 2003 guidelines BIBREF27 . For Spanish and Dutch, we also use Ancora 2.0 BIBREF43 and SONAR-1 BIBREF33 respectively. SONAR-1 is a one million word Dutch corpus with both coarse-grained and fine-grained named entity annotations. The coarse-grained level includes product and event entity types in addition to the four types defined in CoNLL data. Ancora adds date and number types to the CoNLL four main types. In Basque the only gold standard corpus is Egunkaria BIBREF23 . Although the Basque Egunkaria dataset is annotated with four entity types, the miscellaneous class is extremely sparse, occurring only in a proportion of 1 to 10. Thus, in the training data there are 156 entities annotated as MISC whereas each of the other three classes contain around 1200 entities.", "In the datasets described so far, named entities were assumed to be non-recursive and non-overlapping. During the annotation process, if a named entity was embedded in a longer one, then only the longest mention was annotated. The exceptions are the GermEval 2014 shared task data for German and MEANTIME, where nested entities are also annotated (both inner and outer spans).", "The GermEval 2014 NER shared task BIBREF25 aimed at improving the state of the art of German NERC which was perceived to be comparatively lower than the English NERC. Two main extensions were introduced in GermEval 2014; (i) fine grained named entity sub-types to indicate derivations and compounds; (ii) embedded entities (and not only the longest span) are annotated. 
In total, there are 12 types for classification: person, location, organization, other plus their sub-types annotated at their inner and outer levels.", "Finally, the MEANTIME corpus BIBREF44 is a multilingual (Dutch, English, Italian and Spanish) publicly available evaluation set annotated within the Newsreader project. It consists of 120 documents, divided into 4 topics: Apple Inc., Airbus and Boeing, General Motors, Chrysler and Ford, and the stock market. The articles are selected in such a way that the corpus contains different articles that deal with the same topic over time (e.g. launch of a new product, discussion of the same financial indexes). Moreover, it contains nested entities so the evaluation results will be provided in terms of the outer and the inner spans of the named entities. MEANTIME includes six named entity types: person, location, organization, product, financial and mixed." ], [ "Named entity recognition is a task with a long history in NLP. Therefore, we will summarize those approaches that are most relevant to our work, especially those we will directly compared with in Section SECREF4 . Since CoNLL shared tasks, the most competitive approaches have been supervised systems learning CRF, SVM, Maximum Entropy or Averaged Perceptron models. In any case, while the machine learning method is important, it has also been demonstrated that good performance might largely be due to the feature set used BIBREF45 . Table TABREF13 provides an overview of the features used by previous best scoring approaches for each of the five languages we address in this paper.", "Traditionally, local features have included contextual and orthographic information, affixes, character-based features, prediction history, etc. As argued by the CoNLL 2003 organizers, no feature set was deemed to be ideal for NERC BIBREF41 , although many approaches for English refer to BIBREF46 as a useful general approach.", "Some of the CoNLL participants use linguistic information (POS, lemmas, chunks, but also specific rules or patterns) for Dutch and English BIBREF47 , BIBREF45 , although these type of features was deemed to be most important for German, for which the use of linguistic features is pervasive BIBREF25 . This is caused by the sparsity caused by the declension cases, the tendency to form compounds containing named entities and by the capitalization of every noun BIBREF24 . For example, the best system among the 11 participants in GermEval 2014, ExB, uses morphological features and specific suffix lists aimed at capturing frequent patterns in the endings of named entities BIBREF48 .", "In agglutinative languages such as Basque, which contains declension cases for named entities, linguistic features are considered to be a requirement. For example, the country name `Espainia' (Spain in Basque) can occur in several forms, Espainian, Espainiera, Espainiak, Espainiarentzat, Espainiako, and many more. Linguistic information has been used to treat this phenomenon. The only previous work for Basque developed Eihera, a rule-based NERC system formalized as finite state transducers to take into account declension classes BIBREF23 . The features of Eihera include word, lemma, POS, declension case, capitalized lemma, etc. These features are complemented with gazetteers extracted from the Euskaldunon Egunkaria newspaper and semantic information from the Basque WordNet.", "Dictionaries are widely used to inject world knowledge via gazetteer matches as features in machine learning approaches to NERC. 
The best performing systems carefully compile their own gazetteers from a variety of sources BIBREF47 . BIBREF31 leverage a collection of 30 gazetteers and matches against each one are weighted as a separate feature. In this way they trust each gazetteer to a different degree. BIBREF49 carefully compiled a large collection of English gazetteers extracted from US Census data and Wikipedia and applied them to the process of inducing word embeddings with very good results.", "While it is possible to automatically extract them from various corpora or resources, they still require careful manual inspection of the target data. Thus, our approach only uses off the shelf gazetteers whenever they are publicly available. Furthermore, our method collapses every gazetteer into one dictionary. This means that we only add a feature per token, instead of a feature per token and gazetteer.", "The intuition behind non-local (or global) features is to treat similarly all occurrences of the same named entity in a text. BIBREF47 proposed a method to produce the set of named entities for the whole sentence, where the optimal set of named entities for the sentence is the coherent set of named entities which maximizes the summation of confidences of the named entities in the set. BIBREF31 developed three types of non-local features, analyzing global dependencies in a window of between 200 and 1000 tokens.", "Semi-supervised approaches leveraging unlabeled text had already been applied to improve results in various NLP tasks. More specifically, it had been previously shown how to apply Brown clusters BIBREF28 for Chinese Word Segmentation BIBREF50 , dependency parsing BIBREF35 , NERC BIBREF51 and POS tagging BIBREF36 .", " BIBREF31 used Brown clusters as features obtaining what was at the time the best published result of an English NERC system on the CoNLL 2003 testset. BIBREF52 made a rather exhaustive comparison of Brown clusters, Collobert and Weston's embeddings BIBREF53 and HLBL embeddings BIBREF54 to improve chunking and NERC. They show that in some cases the combination of word representation features was positive but, although they used Ratinov and Roth's (2009) system as starting point, they did not manage to improve over the state of the art. Furthermore, they reported that Brown clustering features performed better than the word embeddings.", " BIBREF49 extend the Skip-gram algorithm to learn 50-dimensional lexicon infused phrase embeddings from 22 different gazetteers and the Wikipedia. The resulting embeddings are used as features by scaling them by a hyper-parameter which is a real number tuned on the development data. BIBREF49 report best results up to date for English NERC on CoNLL 2003 test data, 90.90 F1.", "The best German CoNLL 2003 system (an ensemble) was outperformed by BIBREF24 . They trained the Stanford NER system BIBREF32 , which uses a linear-chain Conditional Random Field (CRF) with a variety of features, including lemma, POS tag, etc. Crucially, they included “distributional similarity” features in the form of Clark clusters BIBREF29 induced from large unlabeled corpora: the Huge German Corpus (HGC) of around 175M tokens of newspaper text and the deWac corpus BIBREF55 consisting of 1.71B tokens of web-crawled data. 
Using the clusters induced from deWac as a form of semi-supervision improved the results over the best CoNLL 2003 system by 4 points in F1.", "The best participant of the English CoNLL 2003 shared task used the results of two externally trained NERC taggers to create an ensemble system BIBREF56 . BIBREF49 develop a stacked linear-chain CRF system: they train two CRFs with roughly the same features; the second CRF can condition on the predictions made by the first CRF. Their “baseline” system uses a similar local featureset as Ratinov and Roth's (2009) but complemented with gazetteers. Their baseline system combined with their phrase embeddings trained with infused lexicons allow them to report the best CoNLL 2003 result so far.", "The best system of the GermEval 2014 task built an ensemble of classifiers and pattern extractors to find the most likely tag sequence BIBREF48 . They paid special attention to out of vocabulary words which are addressed by semi-supervised word representation features and an ensemble of POS taggers. Furthermore, remaining unknown candidate mentions are tackled by look-up via the Wikipedia API.", "Apart from the feature types, the last two columns of Table TABREF13 refer to whether the systems are publicly available and whether any external resources used for training are made available (e.g., induced word embeddings, gazetteers or corpora). This is desirable to be able to re-train the systems on different datasets. For example, we would have been interested in training the Stanford NER system with the full Ancora corpus for the evaluation presented in Table TABREF85 , but their Spanish cluster lexicon is not available. Alternatively, we would have liked to train our system with the same Ancora partition used to train Stanford NER, but that is not available either." ], [ "The design of ixa-pipe-nerc aims at establishing a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations (POS tags, lemmas, syntax, semantics) and/or cascading errors if automatic language processors are used. The underlying motivation is to obtain robust models to facilitate the development of NERC systems for other languages and datasets/domains while obtaining state of the art results. Our system consists of:", "Table TABREF24 provides an example of the features generated by our system." ], [ "The local features constitute our baseline system on top of which the clustering features are added. We implement the following feature set, partially inspired by previous work BIBREF46 :", "Token: Current lowercase token (w), namely, ekuadorko in Table TABREF24 .", "Token Shape: Current lowercase token (w) plus current token shape (wc), where token shape consist of: (i) The token is either lowercase or a 2 digit word or a 4 digit word; (ii) If the token contains digits, then whether it also contains letters, or slashes, or hyphens, or commas, or periods or is numeric; (iii) The token is all uppercase letters or is an acronym or is a one letter uppercase word or starts with capital letter. Thus, in Table TABREF24 1994an is a 4 digit word (4d), Ekuadorko has an initial capital shape (ic) and hiriburuan is lowercase (lc).", "Previous prediction: the previous outcome (pd) for the current token. The previous predictions in our example are null because these words have not been seen previously, except for the comma.", "Sentence: Whether the token is the beginning of the sentence. 
None of the tokens in our example is at the beginning of the sentence, so this feature is not active in Table TABREF24 .", "Prefix: Two prefixes consisting of the first three and four characters of the current token: Eku and Ekua.", "Suffix: The four suffixes of length one to four from the last four characters of the current token.", "Bigram: Bigrams including the current token and the token shape.", "Trigram: Trigrams including the current token and the token shape.", "Character n-gram: All lowercase character bigrams, trigrams, fourgrams and fivegrams from the current token (ng).", "Token, token shape and previous prediction features are placed in a 5 token window, namely, for these three features we also consider the previous and the next two words, as shown in Table TABREF24 ." ], [ "We add gazetteers to our system only if they are readily available to use, but our approach does not fundamentally depend upon them. We perform a look-up in a gazetteer to check if a named entity occurs in the sentence. The result of the look-up is represented with the same encoding chosen for the training process, namely, the BIO or BILOU scheme. Thus, for the current token we add the following features:", "The current named entity class in the encoding schema. Thus, in the BILOU encoding we would have “unit”, “beginning”, “last”, “inside”, or, if no match is found, “outside”, combined with the specific named entity type (LOC, ORG, PER, MISC, etc.).", "The current named entity class as above and the current token." ], [ "The general idea is that by using some type of semantic similarity or word cluster induced over large unlabeled corpora it is possible to improve the predictions for unseen words in the test set. This type of semi-supervised learning may be aimed at improving performance over a fixed amount of training data or, given a fixed target performance level, to establish how much supervised data is actually required to reach such performance BIBREF35 .", "So far the most successful approaches have only used one type of word representation BIBREF49 , BIBREF24 , BIBREF31 . However, our simple baseline combined with one type of word representation features is not able to compete with previous, more complex, systems. Thus, instead of encoding more elaborate features, we have devised a simple method to combine and stack various types of clustering features induced over different data sources or corpora. In principle, our method can be used with any type of word representation. However, for comparison purposes, we decided to use word representations previously used in successful NERC approaches: Brown clusters BIBREF31 , BIBREF52 , Word2vec clusters BIBREF49 and Clark clusters BIBREF32 , BIBREF24 . As can be observed in Table TABREF24 , our clustering features are placed in a 5 token window.", "The Brown clustering algorithm BIBREF28 is a hierarchical algorithm which clusters words to maximize the mutual information of bigrams. Thus, it is a class-based bigram model in which:", "The probability of a document corresponds to the product of the probabilities of its bigrams,", "the probability of each bigram is calculated by multiplying the probability of a bigram model over latent classes by the probability of each class generating the actual word types in the bigram, and", "each word type has non-zero probability only for a single class.", "The Brown algorithm takes a vocabulary of words to be clustered and a corpus of text containing these words. 
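Written out, the class-based bigram model characterized by the three points above assigns a corpus $w_1, \ldots, w_n$ the probability

$$P(w_1, \ldots, w_n) = \prod_{i=1}^{n} p\big(C(w_i) \mid C(w_{i-1})\big)\, p\big(w_i \mid C(w_i)\big),$$

where $C(w)$ is the single class to which word type $w$ is assigned; choosing the partition that maximizes this likelihood is equivalent to maximizing the average mutual information between the classes of adjacent words.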
It starts by assigning each word in the vocabulary to its own separate cluster, then iteratively merges the pair of clusters which leads to the smallest decrease in the likelihood of the text corpus. This produces a hierarchical clustering of the words, which is usually represented as a binary tree, as shown in Figure FIGREF44 . In this tree every word is uniquely identified by its path from the root, and the path can be represented by a bit string. It is also possible to choose different levels of word abstraction by choosing different depths along the path from the root to the word. Therefore, by using paths of various lengths, we obtain clustering features of different granularities BIBREF57 .", "We use paths of length 4, 6, 10 and 20 as features BIBREF31 . However, we introduce several novelties in the design of our Brown clustering features:", "For each feature which is token-based, we add a feature containing the paths computed for the current token. Thus, taking into account our baseline system, we will add the following Brown clustering features:", "Brown Token: existing paths of length 4, 6, 10 and 20 for the current token.", "Brown Token Shape: existing paths of length 4, 6, 10, 20 for the current token and current token shape.", "Brown Bigram: existing paths of length 4, 6, 10, 20 for bigrams including the current token.", "Brown clustering features benefit from two additional features:", "Previous prediction plus token: the previous prediction (pd) for the current token and the current token.", "Previous two predictions: the previous prediction for the current and the previous token.", "For space reasons, Table TABREF24 only shows the Brown Token (bt) and Brown Token Shape (c) features for paths of length 4 and 6. We use the publicly available tool implemented by BIBREF50 with default settings. The input consists of a corpus tokenized and segmented one sentence per line, without punctuation. Furthermore, we follow previous work and remove all sentences which consist of less than 90% lowercase characters BIBREF50 , BIBREF52 before inducing the Brown clusters.", " BIBREF29 presents a number of unsupervised algorithms, based on distributional and morphological information, for clustering words into classes from unlabeled text. The focus is on clustering infrequent words on a small numbers of clusters from comparatively small amounts of data. In particular, BIBREF29 presents an algorithm combining distributional information with morphological information of words “by composing the Ney-Essen clustering model with a model for the morphology within a Bayesian framework”. The objective is to bias the distributional information to put words that are morphologically similar in the same cluster. We use the code released by BIBREF29 off the shelf to induce Clark clusters using the Ney-Essen with morphological information method. The input of the algorithm is a sequence of lowercase tokens without punctuation, one token per line with sentence breaks.", "Our Clark clustering features are very simple: we perform a look-up of the current token in the clustering lexicon. If a match is found, we add as a feature the clustering class, or the lack of match if the token is not found (see Clark-a and Clark-b in Table TABREF24 ).", "Another family of language models that produces word representations are the neural language models. These approaches produce representation of words as continuous vectors BIBREF53 , BIBREF54 , also called word embeddings. 
Nowadays, perhaps the most popular among them is the Skip-gram algorithm BIBREF30 . The Skip-gram algorithm uses shallow log-linear models to compute vector representation of words which are more efficient than previous word representations induced on neural language models. Their objective is to produce word embeddings by computing the probability of each n-gram as the product of the conditional probabilities of each context word in the n-gram conditioned on its central word BIBREF30 .", "Instead of using continuous vectors as real numbers, we induce clusters or word classes from the word vectors by applying K-means clustering. In this way we can use the cluster classes as simple binary features by injecting unigram match features. We use the Word2vec tool released by BIBREF30 with a 5 window context to train 50-dimensional word embeddings and to obtain the word clusters on top of them. The input of the algorithm is a corpus tokenized, lowercased, with punctuation removed and in one line. The Word2vec features are implemented exactly like the Clark features.", "We successfully combine clustering features from different word representations. Furthermore, we also stack or accumulate features of the same type of word representation induced from different data sources, trusting each clustering lexicon to a different degree, as shown by the five encoded clustering features in Table TABREF24 : two Clark and Word2vec features from different source data and one Brown feature. When using word representations as semi-supervised features for a task like NERC, two principal factors need to be taken into account: (i) the source data or corpus used to induce the word representations and (ii) the actual word representation used to encode our features which in turn modify the weight of our model's parameters in the training process.", "For the clustering features to be effective the induced clusters need to contain as many words appearing in the training, development and test sets as possible. This can be achieved by using corpora closely related to the text genre or domain of the data sets or by using very large unlabeled corpora which, although not closely domain-related, be large enough to include many relevant words. For example, with respect to the CoNLL 2003 English dataset an example of the former would be the Reuters corpus while the Wikipedia would be an example of the latter.", "The word representations obtained by different algorithms would capture different distributional properties of words in a given corpus or data source. Therefore, each type of clustering would allow us to capture different types of occurring named entity types. In other words, combining and stacking different types of clustering features induced over a variety of data sources should help to capture more similarities between different words in the training and test sets, increasing the contribution to the weights of the model parameters in the training process." ], [ "In this Section we report on the experiments performed with the ixa-pipe-nerc system as described in the previous section. The experiments are performed in 5 languages: Basque, Dutch, English, German and Spanish. For comparison purposes, in-domain results are presented in Section SECREF61 using the most common NERC datasets for each language as summarized in Table TABREF10 . 
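As an illustration of how such a Word2vec-plus-K-means lexicon can be built, the sketch below uses gensim and scikit-learn in place of the original Word2vec tool; the corpus iterator, the minimum count, and the particular number of clusters (the experiments reported below explore the 100-600 range) are assumptions.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def build_cluster_lexicon(sentences, n_clusters=300):
    """`sentences` is assumed to be an iterable of token lists from a corpus
    that has been tokenized, lowercased and stripped of punctuation."""
    # 50-dimensional Skip-gram embeddings with a 5-token context window
    # (gensim >= 4 calls this parameter `vector_size`).
    w2v = Word2Vec(sentences, vector_size=50, window=5, sg=1,
                   min_count=5, workers=4)
    words = w2v.wv.index_to_key
    vectors = w2v.wv[words]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    return {word: int(label) for word, label in zip(words, labels)}
```

The resulting lexicon is consulted at feature-extraction time exactly like the Clark lexicon: a look-up of the current token yields its class, or a no-match indicator.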
Section SECREF75 analyzes the performance when reducing training data and Section SECREF79 presents eight out-of-domain evaluations for three languages: Dutch, English and Spanish.", "The results for Dutch, English and Spanish do not include trigrams and character n-grams in the local featureset described in Section SECREF25 , except for the models in each in-domain evaluation which are marked with “charngram 1:6”.", "We also experiment with dictionary features but, in contrast to previous approaches such as BIBREF49 , we only use currently available gazetteers off-the-shelf. For every model marked with “dict” we use the thirty English Illinois NER gazetteers BIBREF31 , irrespective of the target language. Additionally, the English models use six gazetteers about the Global Automotive Industry provided by LexisNexis to the Newsreader project, whereas the German models include, in addition to the Illinois gazetteers, the German dictionaries distributed in the CoNLL 2003 shared task. The gazetteers are collapsed into one large dictionary and deployed as described in Section SECREF35 .", "Finally, the clustering features are obtained by processing the following clusters from publicly available corpora: (i) 1000 Brown clusters; (ii) Clark and Word2vec clusters in the 100-600 range. To choose the best combination of clustering features we test the available permutations of Clark and Word2vec clusters with and without the Brown clusters on the development data. Table TABREF58 provides details of every corpus used to induce the clusters. For example, the first row reads: “Reuters RCV1 was used; the original 63 million words were reduced to 35 million after pre-processing for inducing Brown clusters. Clark and Word2vec clusters were trained on the whole corpus”. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF38 .", "Every evaluation is carried out using the CoNLL NER evaluation script. The results are obtained with the BILOU encoding for every experimental setting except for German CoNLL 2003." ], [ "In this section the results are presented by language. In two cases, Dutch and German, we use two different datasets, making it a total of seven in-domain evaluations.", "We tested our system in the highly competitive CoNLL 2003 dataset. Table TABREF63 shows that three of our models outperform previous best results reported for English in the CoNLL 2003 dataset BIBREF49 . Note that the best F1 score (91.36) is obtained by adding trigrams and character n-gram features to the best model (91.18). The results also show that these models improve the baseline provided by the local features by around 7 points in F1 score. The most significant gain is in terms of recall, almost 9 points better than the baseline.", "We also report very competitive results, only marginally lower than BIBREF49 , based on the stacking and combination of clustering features as described in Section UID57 . Thus, both best cluster and comp models, based on local plus clustering features only, outperform very competitive and more complex systems such as those of BIBREF31 and BIBREF52 , and obtain only marginally lower results than BIBREF49 . 
The stacking and combining effect manifests itself very clearly when we compare the single clustering feature models (BR, CW600, W2VG200 and W2VW400) with the light, comp and best cluster models which improve the overall F1 score by 1.30, 1.72 and 1.85 respectively over the best single clustering model (CW600).", "It is worth mentioning that our models do not score best in the development data. As the development data is closer in style and genre to the training data BIBREF31 , this may suggest that our system generalizes better on test data that is not close to the training data; indeed, the results reported in Section SECREF79 seem to confirm this hypothesis.", "We also compared our results with respect to the best two publicly available English NER systems trained on the same data. We downloaded the Stanford NER system distributed in the 2015-01-30 package. We evaluated their CoNLL model and, while the result is substantially better than their reference paper BIBREF32 , our clustering models obtain better results. The Illinois NER tagger is used by BIBREF31 and BIBREF52 , both of which are outperformed by our system.", "We tested our system in the GermEval 2014 dataset. Table TABREF65 compares our results with the best two systems (ExB and UKP) by means of the M3 metric, which separately analyzes the performance in terms of the outer and inner named entity spans. Table TABREF65 makes explicit the significant improvements achieved by the clustering features on top of the baseline system, particularly in terms of recall (almost 11 points in the outer level). The official results of our best configuration (de-cluster-dict) are reported in Table TABREF66 showing that our system marginally improves the best systems' results on that task (ExB and UKP).", "We also compare our system, in the last three rows, with the publicly available GermaNER BIBREF26 , which reports results for the 4 main outer level entity types (person, location, organization and other). For this experiment we trained the de-cluster and de-cluster + dict models on the four main classes, improving GermaNER's results by almost 3 F1 points. The GermaNER method of evaluation is interesting because allows researchers to directly compare their systems with a publicly available system trained on GermEval data.", "Table TABREF67 compares our German CoNLL 2003 results with the best previous work trained on public data. Our best CoNLL 2003 model obtains results similar to the state of the art performance with respect to the best system published up to date BIBREF24 using public data.", " BIBREF24 also report 78.20 F1 with a model trained with Clark clusters induced using the Huge German Corpus (HGC). Unfortunately, the corpus or the induced clusters were not available.", "The best system up to date on the CoNLL 2002 dataset, originally published by BIBREF47 , is distributed as part of the Freeling library BIBREF34 . Table TABREF69 lists four models that improve over their reported results, almost by 3 points in F1 measure in the case of the es-cluster model (with our without trigram and character n-gram features).", "Despite using clusters from one data source only (see Table TABREF58 ), results in Table TABREF71 show that our nl-cluster model outperforms the best result published on CoNLL 2002 BIBREF45 by 3.83 points in F1 score. 
Adding the English Illinois NER gazetteers BIBREF31 and trigram and character n-gram features increases the score to 85.04 F1, 5.41 points better than previously published work on this dataset.", "We also compared our system with the more recently developed SONAR-1 corpus and the companion NERD system distributed inside its release BIBREF33 . They report 84.91 F1 for the six main named entity types via 10-fold cross validation. For this comparison we chose the local, nl-cluster and nl-cluster-dict configurations from Table TABREF71 and ran them on SONAR-1 using the same settings. The results reported in Table TABREF72 show our system's improvement over previous results on this dataset.", "Table TABREF74 reports on the experiments using the Egunkaria NER dataset provided by BIBREF23 . Due to the sparsity of the MISC class mentioned in Section SECREF9 , we decided to train our models on three classes only (location, organization and person). Thus, the results are obtained by training our models in the customary manner and evaluating on 3 classes. However, for direct comparison with previous work BIBREF23 , we also evaluate our best eu-cluster model (trained on 3 classes) on 4 classes.", "The results show that our eu-cluster model clearly improves upon previous work by 4 points in F1 measure (75.40 vs 71.35). These results are particularly interesting as it had been so far assumed that complex linguistic features and language-specific rules were required to perform well for agglutinative languages such as Basque BIBREF23 . Finally, it is worth noting that the eu-cluster model increases the overall F1 score by 11.72 over the baseline, of which 10 points are gained in precision and 13 in terms of recall." ], [ "So far, we have seen how, given a fixed amount of supervised training data, leveraging unlabeled data using multiple cluster sources helped to obtain state of the art results in seven different in-domain settings for five languages. In this section we will investigate to what extent our system allows us to reduce the dependency on supervised training data.", "We first use the English CoNLL 2003 dataset for this experiment. The training set consists of around 204K words and we use various smaller versions of it to test the performance of our best cluster model reported in Table TABREF63 . Table TABREF76 displays the F1 results of the baseline system consisting of local features and the best cluster model. The INLINEFORM0 column refers to the gains of our best cluster model with respect to the baseline model for every portion of the training set.", "While we have already commented on the substantial gains obtained simply by adding our clustering features, it is also interesting to note that the gains are much more substantial when less supervised training data is available. Furthermore, it is striking that training our clustering features using only one eighth of the training data (30K words) allows us to obtain similar performance to the baseline system trained on the full training set. Equally interesting is the fact that cutting the training data by half only marginally harms the overall performance. 
Finally, training on just a quarter of the training set (60K) results in a very competitive model when compared with other publicly available NER systems for English trained on the full training set: it roughly matches Stanford NER's performance, it outperforms models using external knowledge or non-local features reported by BIBREF31 , and also several models reported by BIBREF52 , which use one type of word representation on top of the baseline system.", "We have also re-trained the Illinois NER system BIBREF31 and our best CoNLL 2003 model (en-91-18) for comparison. First, we can observe that for every portion of the training set, both our best cluster and en-91-18 models outperform the Illinois NER system. The best cluster results are noteworthy because, as opposed to Illinois NER, that model does not use gazetteers or global features for extra performance.", "These results are mirrored by those obtained for the rest of the languages and datasets. Thus, Table TABREF77 displays, for each language, the F1 results of the baseline system and of the best cluster models on top of the baseline. Overall, it confirms that our cluster-based models obtain state of the art results using just one half of the data. Furthermore, using just one quarter of the training data we are able to match results of other publicly available systems for every language, outperforming in some cases, such as Basque, much more complex systems of classifiers exploiting language-specific rules and features (POS tags, lemmas, semantic information from WordNet, etc.). Considering that Basque is a low-resourced language, it is particularly relevant to be able to reduce as much as possible the amount of gold supervised data required to develop a competitive NERC system." ], [ "NERC systems are often used in out-of-domain settings, namely, to annotate data that greatly differs from the data from which the NERC models were learned. These differences can involve text genre and/or domain, but also the assumptions of what constitutes a named entity. It is therefore interesting to develop robust NERC systems across both domains and datasets. In this section we demonstrate that our approach, consisting of basic, general local features and the combination and stacking of clusters, produces robust NERC systems in three out-of-domain evaluation settings:", "Class disagreements: Named entities are assigned to different classes in training and test.", "Different text genre: The text genre of training and test data differs.", "Annotation guidelines: The gold annotation of the test data follows different guidelines from the training data. This is usually reflected in different named entity spans.", "The datasets and languages chosen for these experiments are based on the availability of both previous results and publicly distributed NERC systems to facilitate direct comparison of our system with other approaches. Table TABREF83 specifies the datasets used for each out-of-domain setting and language. Details of each dataset can be found in Table TABREF10 .", "MUC 7 annotates seven entity types, including four that are not included in CoNLL data: DATE, MONEY, NUMBER and TIME entities. Furthermore, CoNLL includes the MISC class, which was absent in MUC 7. This means that there are class disagreements in the gold standard annotation between the training and test datasets. In addition to the four CoNLL classes, SONAR-1 includes PRODUCT and EVENT whereas Ancora also annotates DATE and NUMBER. 
For example, consider the following sentence of the MUC 7 gold standard (example taken from BIBREF31 ):", "“...balloon, called the Virgin Global Challenger.”", "The gold annotation in MUC 7 establishes that there is one named entity:", "“...balloon, called [ORG Virgin] Global Challenger.”", "However, according to CoNLL 2003 guidelines, the entire name should be annotated as MISC:", "“...balloon, called [MISC Virgin Global Challenger].”", "In this setting some adjustments are made to the NERC systems' output. Following previous work BIBREF31 , every named entity that is not LOC, ORG, PER or MISC is labeled as `O'. Additionally, for MUC 7 every MISC named entity is changed to `O'. For English we used the models reported in Section UID62 . For Spanish and Dutch we trained our system with the Ancora and SONAR-1 corpora using the configurations described in Sections UID68 and UID70 respectively. Table TABREF85 compares our results with previous approaches: using MUC 7, BIBREF52 provide standard phrase results whereas BIBREF31 score token based F1 results, namely, each token is considered a chunk, instead of considering multi-token spans too. For Spanish we use the Stanford NER Spanish model (2015-01-30 version) trained with Ancora. For Dutch we compare our SONAR-1 system with the companion system distributed with the SONAR-1 corpus BIBREF33 . The results are summarized in Table TABREF85 .", "In this setting the out-of-domain character is given by the differences in text genre between the English CoNLL 2003 set and the Wikigold corpus. We compare our system with English models trained on large amounts of silver-standard text (3.5M tokens) automatically created from the Wikipedia BIBREF27 . They report results on Wikigold showing that they outperformed their own CoNLL 2003 gold-standard model by 10 points in F1 score. We compare their result with our best cluster model in Table TABREF87 . While the results of our baseline model confirm theirs, our clustering model score is slightly higher. This result is interesting because it is arguably simpler to induce the clusters we use to train ixa-pipe-nerc than to create the silver standard training set from Wikipedia as described in BIBREF27 .", "In this setting the objective is to study not so much the differences in textual genre as the influence of substantially different annotation standards. We only use three classes (location, organization and person) to evaluate the best models presented for in-domain evaluations, labeling as `O' every entity which is not LOC, ORG or PER.", "The text genre of MEANTIME is not that different from CoNLL data. However, differences in the gold standard annotation result in significant disagreements regarding the span of the named entities BIBREF59 . 
For example, the following issues are markedly different with respect to the training data we use for each language:", "Different criteria to decide when a named entity is annotated: in the expression “40 billion US air tanker contract” the MEANTIME gold standard does not mark `US' as location, whereas in the training data this is systematically annotated.", "Mentions including the definite article within the named entity span: `the United States' versus `United States'.", "Longer extents containing common nouns: in the MEANTIME corpus there are many entities such as “United States airframer Boeing”, which in this case is considered an organization, whereas in the training data this span would in general consist of two entities: `United States' as location and `Boeing' as organization.", "Common nouns modifying the proper name: `Spokeswoman Sandy Angers' is annotated as a named entity of type PER whereas in the training data we used, the span of the named entity would usually be `Sandy Angers'.", "CoNLL NER phrase-based evaluation punishes any bracketing error as both a false positive and a false negative. Thus, these span-related disagreements make this setting extremely hard for models trained according to other annotation guidelines, as shown by Table TABREF93 . Our baseline models degrade around 40 F1 points and the cluster-based models around 35. Other systems' results worsen much more, especially for Spanish and Dutch. The token-based scores are in general better but the relative performance between systems across languages is similar.", "As an additional experiment, we also tested the English model recommended by Stanford NER which is trained for three classes (LOC, PER, ORG) using a variety of public and (not identified) private corpora (referred to as Stanford NER 3 class (ALL) in Table TABREF94 ). The results with respect to their CoNLL model improved by around 3 points in F1 score across named entity labels and evaluation types (phrase or token based). In view of these results, we experimented with multi-corpora training data added to our best CoNLL 2003 model (en-91-18). Thus, we trained using three public training sets: MUC 7, CoNLL 2003 and Ontonotes 4.0. The local model with the three training sets (Local ALL) improved by 12 and 17 points in F1 score across evaluations and entity types, outperforming our best model trained only with CoNLL 2003. Adding the clustering features gained a further 2 to 5 points, surpassing the Stanford NER 3 class multi-corpora model in every evaluation. We believe that the main reason to explain these improvements is the variety and quantity of annotations provided by Ontonotes (1M word corpus), and to a lesser extent by MUC 7, which includes some spans containing common nouns and determiners, making the model slightly more robust regarding mention spans." ], [ "Despite the simplicity of the ixa-pipe-nerc approach, we report best results for English in 4 different datasets: for CoNLL 2003 and for the three English out-of-domain evaluations. For German we improve the results of the best system in the GermEval 2014 task and obtain comparable results to previous work in the CoNLL 2003 dataset using publicly available data. In Spanish we provide results on CoNLL 2002 and in two out-of-domain evaluations, clearly outperforming previous best results. For Dutch we improve over previous results in CoNLL 2002 and SONAR-1 data and two out-of-domain evaluations. Finally, for Basque (Egunkaria) the improvements are considerable."
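As a concrete illustration of the label harmonization applied throughout the out-of-domain experiments above (every entity class outside LOC, ORG, PER and MISC mapped to `O', and MISC additionally mapped to `O' for MUC 7), the short sketch below shows that post-processing step. The BIO tag format and function names are illustrative assumptions, not part of ixa-pipe-nerc or the conlleval tooling.

```python
# Minimal sketch of the class harmonization used before cross-dataset scoring.
# The BIO tag format and names below are illustrative assumptions.

KEPT_CLASSES = {"LOC", "ORG", "PER", "MISC"}

def harmonize(tag: str, drop_misc: bool = False) -> str:
    """Map a CoNLL-style BIO tag onto the evaluation tag set."""
    if tag == "O":
        return tag
    prefix, _, entity_class = tag.partition("-")      # e.g. "B", "-", "DATE"
    if entity_class not in KEPT_CLASSES:
        return "O"                                    # DATE, MONEY, TIME, ... -> O
    if drop_misc and entity_class == "MISC":
        return "O"                                    # MUC 7 has no MISC class
    return f"{prefix}-{entity_class}"

# Example: a MUC 7-style sequence prepared for evaluation against CoNLL models.
tags = ["B-ORG", "I-ORG", "B-DATE", "O", "B-MISC"]
print([harmonize(t, drop_misc=True) for t in tags])
# -> ['B-ORG', 'I-ORG', 'O', 'O', 'O']
```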
], [ "We have shown how to develop robust NERC systems across languages and datasets with minimal human intervention, even for languages with inflected named entities. This is based on adequately combining word representation features on top of shallow and general local features. Crucially, we have empirically demonstrate how to effectively combine various types of simple word representation features depending on the source data available. This has resulted in a clear methodology for using the three types of clustering features which produces very competitive results in both in-domain and out-of-domain settings.", "Thus, despite the relative simplicity of our approach, we report state of the art results for Dutch, English, German, Spanish and Basque in seven in-domain evaluations.", "We also outperform previous work in eight out-of-domain evaluations, showing that our clustering features improve the robustness of NERC systems across datasets. Finally, we have measured how much our system's performance degrades when the amount of supervised data is drastically cut. The results show our models are still very competitive even when reducing the supervised data by half or more. This, together with the lack of linguistic features, facilitates the easy and fast development of NERC systems for new domains or languages.", "In future work we would like to explore more the various types of domain adaptation required for robust performance across text genres and domains, perhaps including micro-blog and noisy text such as tweets. Furthermore, we are also planning to adapt our techniques to other sequence labeling problems such as Opinion Target Extraction BIBREF13 , BIBREF14 and Super Sense tagging BIBREF60 ." ], [ "We would like to thank the anonymous reviewers for their comments to improve this paper. We would also like to thank Sebastian Padó for his help training the Clark clusters. This work has been supported by the European projects NewsReader, EC/FP7/316404 and QTLeap - EC/FP7/610516, and by the Spanish Ministry for Science and Innovation (MICINN) SKATER, Grant No. TIN2012-38584-C06-01 and TUNER, TIN2015-65308-C5-1-R." ] ], "section_name": [ "Introduction", "Contributions", "Related Work", "Datasets", "Related Approaches", "System Description", "Local Features", "Gazetteers", "Clustering Features", "Experimental Results", "In-domain evaluation", "Reducing training data", "Out-of-domain evaluations", "Discussion", "Conclusion and Future Work", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "352a13bc8abd0ed6638e3f67c48d2b8d2adbdeac" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 5: CoNLL 2003 English results." ], "extractive_spans": [], "free_form_answer": "Precision, Recall, F1", "highlighted_evidence": [ "FLOAT SELECTED: Table 5: CoNLL 2003 English results." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "937be2bde16019bb172f8530084e0a0d26ea3a5b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-domain settings." ], "extractive_spans": [], "free_form_answer": "CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-domain settings." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "888a6331029a6d0d43f968bd3351c92471e23819" ], "answer": [ { "evidence": [ "Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. Our NERC system is publicly available and distributed under the Apache 2.0 License and part of the IXA pipes tools BIBREF38 . Every result reported in this paper is obtained using the conlleval script from the CoNLL 2002 and CoNLL 2003 shared tasks. To guarantee reproducibility of results we also make publicly available the models and the scripts used to perform the evaluations. The system, models and evaluation scripts can be found in the ixa-pipe-nerc website.", "The local features constitute our baseline system on top of which the clustering features are added. We implement the following feature set, partially inspired by previous work BIBREF46 :" ], "extractive_spans": [], "free_form_answer": "Perceptron model using the local features.", "highlighted_evidence": [ "Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. ", "The local features constitute our baseline system on top of which the clustering features are added." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what are the evaluation metrics?", "which datasets were used in evaluation?", "what are the baselines?" ], "question_id": [ "20ec88c45c1d633adfd7bff7bbf3336d01fb6f37", "a4fe5d182ddee24e5bbf222d6d6996b3925060c8", "f463db61de40ae86cf5ddd445783bb34f5f8ab67" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-domain settings.", "Table 2: Features of best previous in-domain results. Local: shallow local features including capitalization, word shape, etc.; Ling: linguistic features such as POS, lemma, chunks and semantic information from Wordnet; Global: global features; Gaz: gazetteers; WR: word representation features; Rules: manually encoded rules; Ensemble: stack of classifiers or ensemble system; Public: if the system is publicly distributed. Res: If any external resources used are publicly distributed to allow re-training.", "Table 3: Features generated for the Basque sentence “Morras munduko txapeldun izan zen juniorretan 1994an, Ekuadorko hiriburuan, Quiton”. English: Morras was junior world champion in 1994, in the capital of Ecuador, Quito. Current token is ‘Ekuadorko’.", "Figure 1: A Brown clustering hierarchy.", "Table 4: Unlabeled corpora used to induced clusters. For each corpus and cluster type the number of words (in millions) is specified. Average training times: depending on the number of words, Brown clusters training time required between 5h and 48h. Word2vec required 1-4 hours whereas Clark clusters training lasted between 5 hours and 10 days.", "Table 5: CoNLL 2003 English results.", "Table 6: GermEval 2014 M3 metric results and comparison to GermaNER system on the outer spans.", "Table 7: GermEval 2014 Official results.", "Table 8: CoNLL 2003 German results.", "Table 9: CoNLL 2002 Spanish results.", "Table 10: CoNLL 2002 Dutch results.", "Table 11: SONAR-1 10-fold cross validation results.", "Table 12: Basque Egunkaria results.", "Table 13: CoNLL 2003 English results reducing training data.", "Table 14: Multilingual results reducing training data. Datasets employed: Basque (egunkaria), Dutch and Spanish (CoNLL 2002) and German (GermEval 2014 outer). L: Local model. C: cluster model. ∆: difference between them.", "Table 15: Testsets and languages for out-of-domain evaluations.", "Table 16: Out-of-domain evaluation based on class disagreements. English models trained on CoNLL 2003; Spanish models trained with Ancora; Dutch models trained with SONAR-1. T-F1: token-based F1.", "Table 17: Wikigold out-of-domain evaluation based on text genre.", "Table 18: MEANTIME out-of-domain evaluation. English systems trained on CoNLL data. Dutch systems trained with SONAR-1. Stanford NER Spanish model is trained with Ancora (20150130 version) whereas ixa-pipe-nerc is trained with CoNLL data. T-F1: token-based F1. Local : baseline system; best-clusters : nl-clusters, es-cluster and en-best-cluster; best-overall : best configuration previously presented for each language for the in-domain evaluations.", "Table 19: MEANTIME English multi-corpus out-of-domain evaluation." ], "file": [ "5-Table1-1.png", "6-Table2-1.png", "9-Table3-1.png", "11-Figure1-1.png", "13-Table4-1.png", "15-Table5-1.png", "15-Table6-1.png", "16-Table7-1.png", "16-Table8-1.png", "16-Table9-1.png", "17-Table10-1.png", "17-Table11-1.png", "18-Table12-1.png", "18-Table13-1.png", "19-Table14-1.png", "19-Table15-1.png", "20-Table16-1.png", "21-Table17-1.png", "22-Table18-1.png", "22-Table19-1.png" ] }
[ "what are the evaluation metrics?", "which datasets were used in evaluation?", "what are the baselines?" ]
[ [ "1701.09123-15-Table5-1.png" ], [ "1701.09123-5-Table1-1.png" ], [ "1701.09123-Contributions-7", "1701.09123-Local Features-0" ] ]
[ "Precision, Recall, F1", "CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0", "Perceptron model using the local features." ]
452
2002.02427
Irony Detection in a Multilingual Context
This paper proposes the first multilingual (French, English and Arabic) and multicultural (Indo-European languages vs. less culturally close languages) irony detection system. We employ both feature-based models and neural architectures using monolingual word representation. We compare the performance of these systems with state-of-the-art systems to identify their capabilities. We show that these monolingual models trained separately on different languages using multilingual word representation or text-based features can open the door to irony detection in languages that lack annotated data for irony.
{ "paragraphs": [ [ "Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and uses it as an umbrella term that covers satire, parody and sarcasm.", "Irony detection (ID) has gained relevance recently, due to its importance to extract information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performances of sentiment analysis systems drastically decrease when applied to ironic texts BIBREF3, BIBREF4. Most related work concern English BIBREF5, BIBREF6 with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, like English-Czech BIBREF15 and English-Chinese BIBREF16, but not within a cross-lingual perspective.", "In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), no one explored it for irony.", "We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. 
Our contributions are:", "A new freely available corpus of Arabic tweets manually annotated for irony detection.", "Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.", "Cross-lingual ID: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how the proposed models are culture-dependent. Our results are encouraging and open the door to ID in languages that lack of annotated data for irony." ], [ "Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13 that we extended to different political issues and events related to the Middle East and Maghreb that hold during the years 2011 to 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events) and containing or not Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#) . The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non ironic ($NI$) written using standard (formal) and different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.", "To investigate the validity of using the original tweets labels, a sample of $3,000$ $I$ and $3,000$ $NI$ was manually annotated by two Arabic native speakers which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. Agreements being relatively good knowing the difficulty of the task, we sampled $5,713$ instances from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets and tweets that depend on external links, images or videos to understand their meaning.", "French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.", "English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.", "Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for train and test sets to have fair cross-lingual experiments as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the classes distribution (ironic vs. non ironic), we do not choose a specific ratio but we use the resulted distribution from the random shuffling process.", "", "" ], [ "It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve comparable results with existing systems. 
The results can show which kinds of features work better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, they can show us to what extent ID is language dependent, by comparing the monolingual results to the multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).", "Feature-based models. We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29 (a minimal illustrative sketch of such a network is given below). For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.", "Results. Table TABREF9 shows the results obtained when using train-test configurations for each language. For English, our results, in terms of macro F-score ($F$), were not comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more productive compared to standard surface and lexicon-based features.", "", "" ], [ "We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronouns, presence of interjections, emoticons or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based patterns in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping, such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al.'s approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non-European languages.
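To make the monolingual neural setup above concrete, the sketch below builds a Kim-style CNN text classifier over pretrained 300-dimensional embeddings. It is only a minimal sketch: the filter sizes, number of filters, dropout, and training settings are illustrative assumptions, not the values the authors obtained with Hyperopt, and the embedding matrix is assumed to be built beforehand from one of the cited resources (AraVec, FastText, or Word2vec).

```python
# Minimal sketch of a Kim (2014)-style CNN for binary irony classification.
# Hyperparameters are illustrative; `embedding_matrix` (vocab_size x 300) is
# an assumed input built from AraVec / FastText / Word2vec per language.
from tensorflow.keras import layers, models, initializers

def build_cnn(vocab_size, embedding_matrix, max_len=40,
              filter_sizes=(3, 4, 5), n_filters=64):
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    emb = layers.Embedding(
        vocab_size, 300,
        embeddings_initializer=initializers.Constant(embedding_matrix),
        trainable=True)(inputs)                       # fine-tuned, as in the paper
    pooled = []
    for size in filter_sizes:
        conv = layers.Conv1D(n_filters, size, activation="relu")(emb)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    features = layers.Concatenate()(pooled)
    features = layers.Dropout(0.5)(features)
    output = layers.Dense(1, activation="sigmoid")(features)   # ironic vs. not
    model = models.Model(inputs, output)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: X_train holds padded word-index sequences, y_train is in {0, 1};
# the 20% validation split mirrors the tuning protocol mentioned above.
# model = build_cnn(vocab_size, embedding_matrix)
# model.fit(X_train, y_train, validation_split=0.2, epochs=5, batch_size=64)
```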
In each experiment, we took 20% of the training set to validate the model before the testing process. Table TABREF11 presents the results.", "", "", "From a semantic perspective, despite the language and cultural differences between the Arabic and French languages, the CNN results show high performance compared to the other language pairs when we train on each of these two languages and test on the other one. The French and English pair behaves similarly, although when we train on French the results are somewhat lower. We have a similar case when we train on Arabic and test on English. We can explain this by the fact that the language of the Arabic and French tweets is quite informal and contains many dialect words that may not exist in the pretrained embeddings we used, compared to the English ones (a lower embedding coverage ratio), which makes it harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help in cases where the semantic features give weak detection; this is the case for the $Ar\\longrightarrow En$ configuration. It is worth mentioning that the highest result we get in this experiment is from the En$\\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non-European languages (cf. (En/Fr)$\\rightarrow $Ar), we obtain similar results to those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and the best results are achieved by Ar $\\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative style of the ironic tweets." ], [ "This paper proposes the first multilingual approach to ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting, provided a cross-lingual word representation or basic surface features are available. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on cross-lingual word representation shows that irony exhibits a certain similarity across the languages we targeted despite the cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. The manual analysis of the common misclassified tweets across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!) where the writer mocks the Egyptian revolution, as the current president \"Sisi\" is viewed as one of Mubarak's fellows. (2) Second, the presence of out-of-vocabulary (OOV) terms, due to the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the set of unseen OOV words is large during the training process. We found tweets in all three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g., phat instead of fat).
(3) Another important issue is the difficulty of dealing with the Arabic language. Arabic tweets are often characterized by non-diacritised texts, a large variation of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), the presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and finally linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found that some tweets contain only words from one of these varieties and most of these words do not exist in the Arabic embedding model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open to multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian." ], [ "The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31)." ] ], "section_name": [ "Motivations", "Data", "Monolingual Irony Detection", "Cross-lingual Irony Detection", "Discussions and Conclusion", "Acknowledgment" ] }
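Returning to the cross-lingual setup used above: the mapping of Conneau et al. can be learned without supervision, but its core refinement step reduces to an orthogonal Procrustes problem over a (seed) bilingual dictionary. The sketch below shows only that supervised Procrustes variant, as a hedged illustration of projecting one monolingual embedding space onto another; it is not the authors' pipeline, and all inputs (embedding dictionaries, seed pairs) are assumed.

```python
# Minimal sketch: align a source embedding space to a target space with an
# orthogonal mapping W (Procrustes solution). `src_emb` and `tgt_emb` map a
# word to a 300-d numpy vector; `seed_pairs` is a small bilingual dictionary
# of (src_word, tgt_word) tuples -- all assumed inputs.
import numpy as np

def procrustes_mapping(src_emb, tgt_emb, seed_pairs):
    pairs = [(s, t) for s, t in seed_pairs if s in src_emb and t in tgt_emb]
    X = np.stack([src_emb[s] for s, _ in pairs])
    Y = np.stack([tgt_emb[t] for _, t in pairs])
    # W = argmin ||XW - Y||_F s.t. W orthogonal  ->  W = U V^T from SVD(X^T Y)
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

def nearest_target_words(word, W, src_emb, tgt_emb, k=3):
    """Nearest target-language neighbours of a mapped source word (cosine)."""
    q = src_emb[word] @ W
    q = q / np.linalg.norm(q)
    scores = {t: float(v @ q) / np.linalg.norm(v) for t, v in tgt_emb.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Usage sketch: map French vectors into the English space, then feed the mapped
# vectors to the CNN trained on English (the train-on-one / test-on-another setup).
# W = procrustes_mapping(fr_emb, en_emb, seed_pairs)
# fr_mapped = {w: v @ W for w, v in fr_emb.items()}
```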
{ "answers": [ { "annotation_id": [ "d5c6be3c5eb7c1ad1fae65b8efc3bcf9689a35a5" ], "answer": [ { "evidence": [ "We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results." ], "extractive_spans": [ " a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space" ], "free_form_answer": "", "highlighted_evidence": [ "ch", "As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "353a4bd1e727b871373244835cb2f4cf6ff2d7b1" ], "answer": [ { "evidence": [ "From a semantic perspective, despite the language and cultural differences between Arabic and French languages, CNN results show a high performance comparing to the other languages pairs when we train on each of these two languages and test on the other one. Similarly, for the French and English pair, but when we train on French they are quite lower. We have a similar case when we train on Arabic and test on English. We can justify that by, the language presentation of the Arabic and French tweets are quite informal and have many dialect words that may not exist in the pretrained embeddings we used comparing to the English ones (lower embeddings coverage ratio), which become harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help in the case when the semantic aspect shows weak detection; this is the case for the $Ar\\longrightarrow En$ configuration. It is worthy to mention that the highest result we get in this experiment is from the En$\\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non European languages (cf. 
(En/Fr)$\\rightarrow $Ar), we obtain similar results than those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and best results are achieved by Ar $\\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative way of the ironic tweets." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " Finally, when investigating the relatedness between European vs. non European languages (cf. (En/Fr)$\\rightarrow $Ar), we obtain similar results than those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and best results are achieved by Ar $\\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative way of the ironic tweets." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "69195d58804a6eae277d4693f49a701bab02049e" ], "answer": [ { "evidence": [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ], "extractive_spans": [ "Convolutional Neural Network (CNN)" ], "free_form_answer": "", "highlighted_evidence": [ " We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "f673ff4bca3443bbf3cb65090f4e3738f724ffcc" ], "answer": [ { "evidence": [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. 
The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ], "extractive_spans": [ "language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities)", " language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)" ], "free_form_answer": "", "highlighted_evidence": [ " We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "7f045f5e93897f2d2324376a5f0f64bf95d9a457" ], "answer": [ { "evidence": [ "Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library." ], "extractive_spans": [], "free_form_answer": "AraVec for Arabic, FastText for French, and Word2vec Google News for English.", "highlighted_evidence": [ " For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "What multilingual word representations are used?", "Do the authors identify any cultural differences in irony use?", "What neural architectures are used?", "What text-based features are used?", "What monolingual word representations are used?" ], "question_id": [ "3d7ab856a5cade7ab374fc2f2713a4d0a30bbd56", "212977344f4bf2ae8f060bdac0317db2d1801724", "0c29d08f766b06ceb2421aa402e71a2d65a5a381", "c9ee70c481c801892556eb6b9fd8ee38197923be", "a24a7a460fd5e60d71a7e787401c68caa4702df6" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "irony", "irony", "irony", "irony", "irony" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1. Tweet distribution in all corpora.", "Table 2. Results of the monolingual experiments (in percentage) in terms of accuracy (A), precision (P), recall (R), and macro F-score (F).", "Table 3. Results of the cross-lingual experiments." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png" ] }
[ "What monolingual word representations are used?" ]
[ [ "2002.02427-Monolingual Irony Detection-1" ] ]
[ "AraVec for Arabic, FastText for French, and Word2vec Google News for English." ]
453
1807.09671
A Novel ILP Framework for Summarizing Content with High Lexical Variety
Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include the student responses to post-class reflective questions, product reviews, and news articles published by different news agencies related to the same events. High lexical diversity of these documents hinders the system's ability to effectively identify salient content and reduce summary redundancy. In this paper, we overcome this issue by introducing an integer linear programming-based summarization framework. It incorporates a low-rank approximation to the sentence-word co-occurrence matrix to intrinsically group semantically-similar lexical items. We conduct extensive experiments on datasets of student responses, product reviews, and news documents. Our approach compares favorably to a number of extractive baselines as well as a neural abstractive summarization system. The paper finally sheds light on when and why the proposed framework is effective at summarizing content with high lexical variety.
{ "paragraphs": [ [ "Summarization is a promising technique for reducing information overload. It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and concatenating them to form a summary, whereas abstractive methods can involve a number of high-level text operations such as word reordering, paraphrasing, and generalization BIBREF1 . To date, summarization has been successfully exploited for a number of text domains, including news articles BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , product reviews BIBREF6 , online forum threads BIBREF7 , meeting transcripts BIBREF8 , scientific articles BIBREF9 , BIBREF10 , student course responses BIBREF11 , BIBREF12 , and many others.", "Summarizing content contributed by multiple authors is particularly challenging. This is partly because people tend to use different expressions to convey the same semantic meaning. In a recent study of summarizing student responses to post-class reflective questions, Luo et al., Luo:2016:NAACL observe that the students use distinct lexical items such as “bike elements” and “bicycle parts” to refer to the same concept. The student responses frequently contain expressions with little or no word overlap, such as “the main topics of this course” and “what we will learn in this class,” when they are prompted with “describe what you found most interesting in today's class.” A similar phenomenon has also been observed in the news domain, where reporters use different nicknames, e.g., “Bronx Zoo” and “New York Highlanders,” to refer to the baseball team “New York Yankees.” Luo et al., Luo:2016:NAACL report that about 80% of the document bigrams occur only once or twice for the news domain, whereas the ratio is 97% for student responses, suggesting the latter domain has a higher level of lexical diversity. When source documents contain diverse expressions conveying the same meaning, it can hinder the summarization system's ability to effectively identify salient content from the source documents. It can also increase the summary redundancy if lexically-distinct but semantically-similar expressions are included in the summary.", "Existing neural encoder-decoder models may not work well at summarizing such content with high lexical variety BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . On one hand, training the neural sequence-to-sequence models requires a large amount of parallel data. The cost of annotating gold-standard summaries for many domains such as student responses can be prohibitive. Without sufficient labelled data, the models can only be trained on automatically gathered corpora, where an instance often includes a news article paired with its title or a few highlights. On the other hand, the summaries produced by existing neural encoder-decoder models are far from perfect. The summaries are mostly extractive with minor edits BIBREF16 , contain repetitive words and phrases BIBREF17 and may not accurately reproduce factual details BIBREF18 , BIBREF19 . We examine the performance of a state-of-the-art neural summarization model in Section § SECREF28 .", "In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. 
The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts\" will be allowed to partially contain “bike elements\" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings.", "Our research contributions of this work include the following.", "In the following sections we first present a thorough review of the related work (§ SECREF2 ), then introduce our ILP summarization framework (§ SECREF3 ) with a low-rank approximation of the co-occurrence matrix optimized using the proximal gradient method (§ SECREF4 ). Experiments are performed on a collection of eight datasets (§ SECREF5 ) containing student responses to post-class reflective questions, product reviews, peer reviews, and news articles. Intrinsic evaluation (§ SECREF20 ) shows that the low-rank approximation algorithm can effectively group distinct expressions used in similar semantic context. For extrinsic evaluation (§ SECREF28 ) our proposed framework obtains competitive results in comparison to state-of-the-art summarization systems. Finally, we conduct comprehensive studies analyzing the characteristics of the datasets and suggest critical factors that affect the summarization performance (§ SECREF7 )." ], [ "Extractive summarization has undergone great development over the past decades. It focuses on extracting relevant sentences from a single document or a cluster of documents related to a particular topic. Various techniques have been explored, including maximal marginal relevance BIBREF22 , submodularity BIBREF23 , integer linear programming BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF4 , minimizing reconstruction error BIBREF28 , graph-based models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , determinantal point processes BIBREF33 , neural networks and reinforcement learning BIBREF34 , BIBREF35 among others. Nonetheless, most studies are bound to a single dataset and few approaches have been evaluated in a cross-domain setting. In this paper, we propose an enhanced ILP framework and evaluate it on a broad range of datasets. We present an in-depth analysis of the dataset characteristics derived from both source documents and reference summaries to understand how domain-specific factors may affect the applicability of the proposed approach.", "Neural summarization has seen promising improvements in recent years with encoder-decoder models BIBREF13 , BIBREF14 . 
The encoder condenses the source text to a dense vector, whereas the decoder unrolls the vector to a summary sequence by predicting one word at a time. A number of studies have been proposed to deal with out-of-vocabulary words BIBREF16 , improve the attention mechanism BIBREF36 , BIBREF37 , BIBREF38 , avoid generating repetitive words BIBREF16 , BIBREF17 , adjust summary length BIBREF39 , encode long text BIBREF40 , BIBREF41 and improve the training objective BIBREF42 , BIBREF15 , BIBREF43 . To date, these studies focus primarily on single-document summarization and headline generation. This is partly because training neural encoder-decoder models requires a large amount of parallel data, yet the cost of annotating gold-standard summaries for most domains can be prohibitive. We validate the effectiveness of a state-of-the-art neural summarization system BIBREF16 on our collection of datasets and report results in § SECREF28 .", "In this paper we focus on the integer linear programming-based summarization framework and propose enhancements to it to summarize text content with high lexical diversity. The ILP framework is shown to perform strongly on extractive summarization BIBREF20 , BIBREF44 , BIBREF21 . It produces an optimal selection of sentences that (i) maximize the coverage of important concepts discussed in the source, (ii) minimize the redundancy in pairs of selected sentences, and (iii) ensure the summary length does not exceed a limit. Previous work has largely focused on improving the estimation of concept weights in the ILP framework BIBREF45 , BIBREF46 , BIBREF47 , BIBREF48 , BIBREF4 . However, distinct lexical items such as “bike elements” and “bicycle parts” are treated as different concepts and their weights are not shared. In this paper we overcome this issue by proposing a low-rank approximation to the sentence-concept co-occurrence matrix to intrinsically group lexically-distinct but semantically-similar expressions; they are considered as a whole when maximizing concept coverage and minimizing redundancy.", "Our work is also different from the traditional approaches using dimensionality reduction techniques such as non-negative matrix factorization (NNMF) and latent semantic analysis (LSA) for summarization BIBREF49 , BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 . In particular, Wang et al. wang2008multi use NNMF to group sentences into clusters; Conroy et al. conroy-EtAl:2013:MultiLing explore NNMF and LSA to obtain better estimates of term weights; Wang et al. wang2016low use low-rank approximation to cast sentences and images to the same embedding space. Different from the above methods, our proposed framework focuses on obtaining a low-rank approximation of the co-occurrence matrix embedded in the ILP framework, so that diverse expressions can share co-occurrence frequencies. Note that out-of-vocabulary expressions and domain-specific terminologies are abundant in our datasets, therefore simply calculating the lexical overlap BIBREF54 or cosine similarity of word embeddings BIBREF55 cannot serve our goal well.", "This manuscript extends our previous work on summarizing student course responses BIBREF11 , BIBREF56 , BIBREF12 submitted after each lecture via a mobile app named CourseMIRROR BIBREF57 , BIBREF58 , BIBREF59 . 
The students are asked to respond to reflective prompts such as “describe what you found most interesting in today's class” and “describe what was confusing or needed more detail.” For large classes with hundreds of students, it can be quite difficult for instructors to manually analyze the student responses, hence the need for automatic summarization. Our extensions of this work are along three dimensions: (i) we crack the “black-box” of the low-rank approximation algorithm to understand if it indeed allows lexically-diverse but semantically-similar items to share co-occurrence statistics; (ii) we compare the ILP-based summarization framework with state-of-the-art baselines, including a popular neural encoder-decoder model for summarization; (iii) we expand the student feedback datasets to include responses collected from materials science and engineering, statistics for industrial engineers, and data structures. We additionally experiment with reviews and news articles. Analyzing the unique characteristics of each dataset allows us to identify crucial factors influencing the summarization performance.", "With the fast development of Massive Open Online Courses (MOOC) platforms, more attention is being dedicated to analyzing educationally-oriented language data. These studies seek to identify student leaders from MOOC discussion forums BIBREF60 , perform sentiment analysis on student discussions BIBREF61 , improve student engagement and reduce student retention BIBREF62 , BIBREF63 , and use language generation techniques to automatically generate feedback to students BIBREF64 . Our focus in this paper is to automatically summarize student responses so that instructors can collect feedback in a timely manner. We expect the developed summarization techniques and result analysis will further summarization research in similar text genres exhibiting high lexical variety." ], [ "Let INLINEFORM0 be a set of documents that consist of INLINEFORM1 sentences in total. Let INLINEFORM2 , INLINEFORM3 indicate if a sentence INLINEFORM4 is selected ( INLINEFORM5 ) or not ( INLINEFORM6 ) in the summary. Similarly, let INLINEFORM7 be the number of unique concepts in INLINEFORM8 . INLINEFORM9 , INLINEFORM10 indicate the appearance of concepts in the summary. Each concept INLINEFORM11 is assigned a weight of INLINEFORM12 , often measured by the number of sentences or documents that contain the concept. The ILP-based summarization approach BIBREF20 searches for an optimal assignment to the sentence and concept variables so that the selected summary sentences maximize coverage of important concepts. The relationship between concepts and sentences is captured by a co-occurrence matrix INLINEFORM13 , where INLINEFORM14 indicates the INLINEFORM15 -th concept appears in the INLINEFORM16 -th sentence, and INLINEFORM17 otherwise. In the literature, bigrams are frequently used as a surrogate for concepts BIBREF24 , BIBREF21 . We follow the convention and use `concept' and `bigram' interchangeably in this paper.", "Two sets of linear constraints are specified to ensure the ILP validity: (1) a concept is selected if and only if at least one sentence carrying it has been selected (Eq. ), and (2) all concepts in a sentence will be selected if that sentence is selected (Eq. ). Finally, the selected summary sentences are allowed to contain a total of INLINEFORM0 words or less (Eq. ).
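The displayed formulas referenced in the paragraph above were lost in extraction (the empty "Eq." pointers and the INLINEFORM/DISPLAYFORM placeholders). Purely as a hedged reconstruction, the description matches the standard Gillick-Favre-style concept-coverage ILP; the symbols below are chosen here for illustration and are not the paper's original notation: $y_j$ marks selection of sentence $j$, $z_i$ selection of concept $i$, $w_i$ the concept weight, $A_{ij}$ the sentence-concept co-occurrence indicator, $l_j$ the sentence length, and $L$ the length budget.

```latex
% Hedged reconstruction -- notation chosen here, not the paper's original symbols.
\begin{align}
\max_{\mathbf{y},\,\mathbf{z}} \quad & \sum_{i=1}^{m} w_i z_i
  && \text{(coverage of weighted concepts)} \\
\text{s.t.} \quad
  & \sum_{j=1}^{n} A_{ij}\, y_j \;\ge\; z_i, \;\; \forall i
  && \text{(a concept needs a selected sentence carrying it)} \\
  & A_{ij}\, y_j \;\le\; z_i, \;\; \forall i, j
  && \text{(a selected sentence selects all of its concepts)} \\
  & \sum_{j=1}^{n} l_j\, y_j \;\le\; L,
  && \text{(summary length budget)} \\
  & y_j \in \{0,1\}, \;\; z_i \in \{0,1\}.
\end{align}
```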
DISPLAYFORM0 ", "The above ILP can be transformed to matrix representation: DISPLAYFORM0 ", "We use boldface letters to represent vectors and matrices. INLINEFORM0 is an auxiliary matrix created by horizontally stacking the concept vector INLINEFORM1 INLINEFORM2 times. Constraint set (Eq. ) specifies that a sentence is selected indicates that all concepts it carries have been selected. It corresponds to INLINEFORM3 constraints of the form INLINEFORM4 , where INLINEFORM5 .", "As far as we know, this is the first-of-its-kind matrix representation of the ILP framework. It clearly shows the two important components of this framework, including 1) the concept-sentence co-occurrence matrix INLINEFORM0 , and 2) concept weight vector INLINEFORM1 . Existing work focus mainly on generating better estimates of concept weights ( INLINEFORM2 ), while we focus on improving the co-occurrence matrix INLINEFORM3 ." ], [ "Because of the lexical diversity problem, we suspect the co-occurrence matrix INLINEFORM0 may not establish a faithful correspondence between sentences and concepts. A concept may be conveyed using multiple bigram expressions; however, the current co-occurrence matrix only captures a binary relationship between sentences and bigrams. For example, we ought to give partial credit to “bicycle parts” given that a similar expression “bike elements” appears in the sentence. Domain-specific synonyms may be captured as well. For example, the sentence “I tried to follow along but I couldn't grasp the concepts” is expected to partially contain the concept “understand the”, although the latter did not appear in the sentence.", "The existing matrix INLINEFORM0 is highly sparse. Only 3.7% of the entries are non-zero in the student response data sets on average (§ SECREF5 ). We therefore propose to impute the co-occurrence matrix by filling in missing values (i.e., matrix completion). This is accomplished by approximating the original co-occurrence matrix using a low-rank matrix. The low-rankness encourages similar concepts to be shared across sentences.", "The ILP with a low-rank approximation of the co-occurrence matrix can be formalized as follows. DISPLAYFORM0 ", "The low-rank approximation process makes two notable changes to the existing ILP framework.", "Concretely, given the co-occurrence matrix INLINEFORM0 , we aim to find a low-rank matrix INLINEFORM1 whose values are close to INLINEFORM2 at the observed positions. Our objective function is DISPLAYFORM0 ", "where INLINEFORM0 represents the set of observed value positions. INLINEFORM1 denotes the trace norm of INLINEFORM2 , i.e., INLINEFORM3 , where INLINEFORM4 is the rank of INLINEFORM5 and INLINEFORM6 are the singular values. By defining the following projection operator INLINEFORM7 , DISPLAYFORM0 ", "our objective function (Eq. EQREF10 ) can be succinctly represented as DISPLAYFORM0 ", "where INLINEFORM0 denotes the Frobenius norm.", "Following Mazumder et al. Mazumder:2010, we optimize Eq. EQREF12 using the proximal gradient descent algorithm. The update rule is DISPLAYFORM0 ", "", "where INLINEFORM0 is the step size at iteration k and the proximal function INLINEFORM1 is defined as the singular value soft-thresholding operator, INLINEFORM2 , where INLINEFORM3 is the singular value decomposition (SVD) of INLINEFORM4 and INLINEFORM5 .", "Since the gradient of INLINEFORM0 is Lipschitz continuous with INLINEFORM1 ( INLINEFORM2 is the Lipschitz continuous constant), we follow Mazumder et al. 
Mazumder:2010 to choose a fixed step size INLINEFORM3 , which has a provable convergence rate of INLINEFORM4 , where INLINEFORM5 is the number of iterations. (An illustrative code sketch of this update follows the full text of this record.)" ], [ "To demonstrate the generality of the proposed approach, we consider three distinct types of corpora, ranging from student response data sets from four different courses to three sets of reviews to one benchmark of news articles. The corpora are summarized in Table TABREF14 .", "", "Student responses. Research has explored using reflection prompts/muddy cards/one-minute papers to promote and collect reflections from students BIBREF65 , BIBREF66 , BIBREF67 . However, it is expensive and time-consuming for humans to summarize such feedback. It is therefore desirable to automatically summarize the student feedback produced in online and offline environments, although it is only recently that a data collection effort to support such research has been initiated BIBREF58 , BIBREF57 . In our data, one particular type of student response is considered, named “reflective feedback” BIBREF68 , which has been shown by educational researchers to enhance interaction between instructors and students BIBREF69 , BIBREF70 . More specifically, students are presented with the following prompts after each lecture and asked to provide responses: 1) “describe what you found most interesting in today's class,” 2) “describe what was confusing or needed more detail,” and 3) “describe what you learned about how you learn.” These open-ended prompts are carefully designed to encourage students to self-reflect, allowing them to “recapture experience, think about it and evaluate it" BIBREF68 .", "To test generality, we gathered student responses from four different courses, as shown in Table TABREF14 . The first one was collected by Menekse et al. Menekse:2011 using paper-based surveys from an introductory materials science and engineering class (henceforth Eng) taught at a major U.S. university, and a subset is made public by us BIBREF11 , available at the link: http://www.coursemirror.com/download/dataset. The remaining three courses were collected by us using a mobile application, CourseMIRROR BIBREF57 , BIBREF58 , and the reference summaries for each course were created by human annotators with the proper background. The human annotators are allowed to create abstract summaries using their own words in addition to selecting phrases directly from the responses. While the 2nd and 3rd data sets are from the same course, Statistics for Industrial Engineers, they were taught in 2015 and 2016 respectively (henceforth Stat2015 and Stat2016) at Boğaziçi University in Turkey. The course was taught in English while the official language is Turkish. The last one is from a fundamental undergraduate Computer Science course (data structures) at a local U.S. university taught in 2016 (henceforth CS2016).", "Another reason we chose the student responses is that we have detailed annotations allowing us to perform an intrinsic evaluation to test whether the low-rank approximation captures similar concepts. An example of the annotation is shown in Table TABREF15 , where phrases in the student responses that are semantically the same as the summary phrases are highlighted with the same color by human annotators. For example, “error bounding" (S2), “error boundary" (S4), “finding that error" (S3), and “determining the critical value for error" (S7) are semantically equivalent to “Error bounding" in the human summary.
Details of the intrinsic evaluation are introduced in SECREF20 .", "", "Product and peer reviews. The review data sets are provided by Xiong and Litman xiong-litman:2014:Coling, consisting of three categories. The first is a subset of product reviews from a widely used data set in review opinion mining and sentiment analysis, contributed by Jindal and Liu jindal2008opinion. In particular, it contains three randomly sampled sets of reviews for a representative product (digital camera), each with 18 reviews of an individual product type (e.g. “summarizing 18 camera reviews for Nikon D3200"). The second consists of movie reviews crawled from IMDB.com by the authors themselves. The third consists of peer reviews collected in a college-level history class from an online reciprocal peer-review system, SWoRD BIBREF71 . The average number of sentences per review set is 85 for camera reviews, 328 for movie reviews and 80 for peer reviews; the average number of words per sentence in the camera, movie, and peer reviews is 23, 24 and 19, respectively. The human summaries were collected in the form of online surveys (one survey per domain) hosted by Qualtrics. Each human summary contains 10 sentences from users' reviews. Example movie reviews are shown in Table TABREF17 .", "News articles. Most summarization work focuses on news documents, as driven by the Document Understanding Conferences (DUC) and Text Analysis Conferences (TAC). For comparison, we select DUC 2004 to evaluate our approach (henceforth DUC04), which is widely used in the literature BIBREF72 , BIBREF73 , BIBREF74 , BIBREF75 , BIBREF76 . It consists of 50 clusters of Text REtrieval Conference (TREC) documents, from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contained on average 10 documents. The task is to create a short summary ( INLINEFORM0 665 bytes) of each cluster. Example news sentences are shown in Table TABREF19 ." ], [ "In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation." ], [ "When examining the imputed sentence-concept co-occurrence matrix, we notice some interesting examples that indicate the effectiveness of the proposed approach, shown in Table TABREF21 .", "We want to investigate whether the matrix completion (MC) helps to capture similar concepts (i.e., bigrams). Recall that if a bigram INLINEFORM0 is similar to another bigram in a sentence INLINEFORM1 , the sentence INLINEFORM2 should assign a partial score to the bigram INLINEFORM3 after the low-rank approximation. For instance, “The activity with the bicycle parts" should give a partial score to “bike elements" since it is similar to “bicycle parts". Note that the co-occurrence matrix INLINEFORM4 measures whether a sentence includes a bigram or not. Without matrix completion, if a bigram INLINEFORM5 does not appear in a sentence INLINEFORM6 , INLINEFORM7 .
After matrix completion, INLINEFORM8 ( INLINEFORM9 is the low-rank approximation matrix of INLINEFORM10 ) becomes a continuous number ranging from 0 to 1 (negative values are truncated). Therefore, INLINEFORM11 does not necessarily mean the sentence contains a similar bigram, since it might also give positive scores to non-similar bigrams. To address this issue, we propose two different ways to test whether the matrix completion really helps to capture similar concepts.", "H1.a: A bigram receives a higher partial score in a sentence that contains similar bigram(s) to it than in a sentence that does not. That is, if a bigram INLINEFORM0 is similar to one of the bigrams in a sentence INLINEFORM1 , but not similar to any bigram in another sentence INLINEFORM2 , then after matrix completion, INLINEFORM3 .", "H1.b: A sentence gives higher partial scores to bigrams that are similar to its own bigrams than to bigrams that are different from its own. That is, if a sentence INLINEFORM0 has a bigram that is similar to INLINEFORM1 , but none of its bigrams is similar to INLINEFORM2 , then, after matrix completion, INLINEFORM3 .", "In order to test these two hypotheses, we need to construct gold-standard pairs of similar bigrams and pairs of different bigrams, which can be automatically obtained from the phrase-highlighting data (Table TABREF15 ). We first extract a candidate bigram from a phrase if and only if a single bigram can be extracted from the phrase. In this way, we discard long phrases containing multiple candidate bigrams in order to avoid ambiguity, as we cannot validate which of them matches another target bigram. A bigram is defined as two words, at least one of which is not a stop word. We then extract every pair of candidate bigrams that are highlighted in the same color as similar bigrams. Similarly, we extract every pair of candidate bigrams that are highlighted in different colors as different bigrams. For example, “bias reduction" is a candidate bigram, which is similar to “bias correction" since they are highlighted in the same color.", "To test H1.a, given a bigram INLINEFORM0 , a bigram INLINEFORM1 that is similar to it, and a bigram INLINEFORM2 that is different from it, we can select the bigram INLINEFORM3 , the sentence INLINEFORM4 that contains INLINEFORM5 , and the sentence INLINEFORM6 that contains INLINEFORM7 . We ignore INLINEFORM8 if it contains any other bigram that is similar to INLINEFORM9 , to eliminate the compounded case in which both similar and different bigrams are within one sentence. Note that if there are multiple sentences containing INLINEFORM10 , we consider each of them. In this way, we construct a triple INLINEFORM11 , and test whether INLINEFORM12 . To test H1.b, for each pair of similar bigrams INLINEFORM13 , and different bigrams INLINEFORM14 , we select the sentence INLINEFORM15 that contains INLINEFORM16 so that we construct a triple INLINEFORM17 , and test whether INLINEFORM18 . We also filter out INLINEFORM19 if it contains similar bigram(s) to INLINEFORM20 , to remove the compounded effect. In this way, we collected a gold-standard data set to test the two hypotheses above, as shown in Table TABREF24 .", "The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than in a sentence that does not. Therefore, H1.a holds.
For H1.b, we only observe INLINEFORM1 significantly on Stat2016, and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. We see several possible explanations. First, the gold-standard data set is still small, in the sense that only a limited portion of the bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated in different colors are unrelated is too strong, since such phrases are not necessarily unrelated. For example, “hypothesis testing" and “H1 and Ho conditions" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occur in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts" in Table TABREF25 is associated with “hard to"." ], [ "Our proposed approach is compared against a range of baselines. They are 1) MEAD BIBREF30 , a centroid-based summarization system that scores sentences based on length, centroid, and position; 2) LexRank BIBREF29 , a graph-based summarization approach based on eigenvector centrality; 3) SumBasic BIBREF77 , an approach that assumes words occurring frequently in a document cluster have a higher chance of being included in the summary; 4) Pointer-Generator Networks (PGN) BIBREF16 , a state-of-the-art neural encoder-decoder approach for abstractive summarization, trained on the CNN/Daily Mail data sets BIBREF78 , BIBREF14 ; and 5) ILP BIBREF21 , a baseline ILP framework without matrix completion.", "The Pointer-Generator Network BIBREF16 is a neural encoder-decoder architecture. It encourages the system to copy words from the source text via pointing, while retaining the ability to produce novel words through the generator. It also contains a coverage mechanism to keep track of what has been summarized, thus reducing word repetition. Pointer-generator networks have not been tested for summarizing content contributed by multiple authors; in this study we evaluate their performance on our collection of datasets.", "For the ILP-based approaches, we use bigrams as concepts (bigrams consisting only of stopwords are removed) and term frequency as concept weights. We leverage the co-occurrence statistics both within and across the entire corpus. We also filtered out bigrams that appear only once in each corpus, yielding better ROUGE scores with lower computational cost. The results without this low-frequency filtering are shown in the Appendix for comparison. In Table TABREF26 , we present summarization results evaluated by ROUGE BIBREF72 and human judges.", "To compare with the official participants in DUC 2004 BIBREF79 , we selected the top-5 systems submitted in the competition (ranked by R-1), together with the 8 human annotators. The results are presented in Table TABREF27 .", "", "ROUGE. ROUGE is a recall-oriented metric, widely used in summarization evaluation, that compares system and reference summaries based on n-gram overlaps. In this work, we report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-SU4 (R-SU4), and ROUGE-L (R-L) scores, which respectively measure the overlap of unigrams, bigrams, skip-bigrams (with a maximum gap length of 4), and the longest common subsequence. (A minimal ROUGE-1 recall sketch is included at the end of this record.) First, there is no winner for all data sets.
MEAD is the best on camera; SumBasic is best on Stat2016 and mostly on Stat2015; ILP is best on DUC04. The ILP baseline is comparable to the best participant (Table TABREF27 ) and even has the best R-2. PGN is the worst, which is not surprising since it was trained on a different data set and may not generalize to our data sets. Our method ILP+MC is best on peer review and mostly on Eng and CS2016. Second, compared with ILP, our method works better on Eng, CS2016, movie, and peer.", "These results show that our proposed method does not always perform better than the ILP framework, and that no single summarization system wins on all data sets. This is perhaps not surprising. The no free lunch theorem for machine learning BIBREF80 states that, averaged over all possible data-generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other BIBREF81 .", "", "Human Evaluation. Because ROUGE cannot thoroughly capture the semantic similarity between system and reference summaries, we further perform a human evaluation. For each task, we present a pair of system outputs in a random order, together with one human summary, to five Amazon turkers. If there are multiple human summaries, we present each human summary together with the pair of system outputs to turkers. For student responses, we also present the prompt. An example Human Intelligence Task (HIT) is illustrated in Fig. FIGREF32 .", "The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-point Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of the same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summaries for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate with an example, for Stat2015 there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT will be done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited, and on average 27.3 HITs were completed by each turker. The distribution of the human preference scores is shown in Fig.
FIGREF34 .", "We calculate the percentage of “wins” (strong or slight preference) for each system among all comparisons with its counterparts. Results are reported in the last column of Table TABREF26 . ILP+MC is preferred significantly more often than ILP on Stat2015, CS2016, and DUC04. There is no significant difference between ILP+MC and SumBasic on the student response data sets. Interestingly, a system with better ROUGE scores is not necessarily preferred more by humans. For example, ILP is preferred more on all three review data sets. Regarding inter-annotator agreement, we find that 48.5% of the individual judgements agree with the majority votes. The agreement scores decomposed by data sets and system pairs are shown in Table TABREF35 . Overall, the agreement scores are quite low, only slightly above the agreement score achieved by random clicking (45.7%). There are several possible explanations. The first is that many turkers did click randomly (39 out of 160 failed our quality checkpoints). Unfortunately, we did not check all the turkers, as we inserted the checkpoints randomly. The second possibility is that comparing two system summaries is difficult for humans, leading to low agreement. Xiong and Litman xiong-litman:2014:Coling also found that it is hard to make humans agree on the choice of summary sentences. A third possibility is that turkers needed to see the raw input sentences, which are not shown in a HIT.", "An interesting observation is that our approach produces summaries with more sentences, as shown in Table TABREF39 . The number of words in the summaries is approximately the same for all methods for a particular corpus, which is constrained by Eq. . For camera, movie and peer reviews, the number of sentences in each human summary is 10, and SumBasic and ILP+MC produce more sentences than ILP. It is hard for people to judge which system summary is closer to a human summary when the summaries are long (216, 242, and 190 words for camera, movie, and peer reviews respectively). For inter-annotator agreement, 50.3% of judgements agree with the majority votes for the student response data sets, 47.6% for reviews, and only 46.3% for news documents. We hypothesize that for these long summaries, people may prefer short system summaries, and for short summaries, people may prefer long system summaries. We leave the examination of this finding to future work.", "Table TABREF40 presents example system outputs. This offers an intuitive understanding of our proposed approach." ], [ "In this section, we want to investigate the impact of the low-rank approximation process on the ILP framework. Therefore, in the following experiments, we focus on the direct comparison between ILP and ILP+MC and leave the comparison to other baselines as future work. The proposed method achieved better summarization performance than the ILP baseline on Eng, CS2016, movie, and peer. Unfortunately, it does not work as expected on two of the student response courses (Stat2015 and Stat2016), camera reviews, and news documents. This leaves open the research question of when and why the proposed method works better. In order to investigate which key factors impact the performance, we would like to perform additional experiments using synthesized data sets.", "A variety of attributes that might impact the performance are summarized in Table TABREF41 , categorized into two types.
The input attributes are extracted from the original input documents, and the summary attributes are extracted from both the human summaries and the input documents. Here are some important attributes we expect to have a large impact on the performance.", "The attributes extracted from the corpora are shown in Table TABREF42 . Note that a bigram that appears more often in the original documents has a better chance of being included in the human summaries, as indicated by INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 . This verifies our choice to cut low-frequency bigrams.", "According to the ROUGE scores, our method works better on Eng, CS2016, movie, and peer (Table TABREF26 ). If we split the corpora into two groups for each attribute, corresponding to whether ILP+MC works better, we do not find significant differences in these attributes. To further understand which factors impact the performance and have more predictive power, we train a binary classification decision tree by treating the 4 corpora on which our method works as positive examples and the remaining 4 as negative examples.", "According to the decision tree model, there is only one decision point in the tree: INLINEFORM0 , the ratio of bigrams in human summaries that are in the input only once. Generally, our proposed method works if INLINEFORM1 , except for camera. When INLINEFORM2 is low, it means that annotators either adopt concepts that appear multiple times or just use their own. In this case, the frequency-based weighting (i.e., INLINEFORM3 in Eq. EQREF5 ) can capture the concepts that appear multiple times. On the other hand, when INLINEFORM4 is high, it means that a large number of bigrams appear only once in the input documents. In this case, annotators have difficulty selecting a representative one because the choice is ambiguous. Therefore, we hypothesize,", "To test the predictive power of this attribute, we would like to evaluate it on new data sets. Unfortunately, creating new data sets with gold-standard human summaries is expensive and time-consuming, and a new data set may not have the desired property within a certain range of INLINEFORM0 . Therefore, we propose to manipulate the ratio and create new data sets from the existing data sets without additional human annotation. INLINEFORM1 can be represented as follows: DISPLAYFORM0 ", "where INLINEFORM0 INLINEFORM1 ", "There are two different ways to control the ratio, both involving removing input sentences with certain constraints.", "In this way, we obtained different levels of INLINEFORM0 by deleting sentences. The ROUGE scores on the synthesized corpora are shown in Table TABREF52 .", "Our hypothesis H2 is partially valid. When the ratio increases, ILP+MC gains a relative advantage over ILP. For example, for Stat2015, ILP+MC is no longer significantly worse than ILP when the ratio increases from 11.9 to 18.1. For camera, ILP+MC becomes better than ILP when the ratio increases from 84.9 to 85.8. For Stat2016, CS2016, and Eng, more improvements, or significant improvements, can be found for ILP+MC compared to ILP as the ratio increases. However, for movie and peer reviews, ILP+MC is worse than ILP as the ratio increases.", "We have investigated a number of attributes that might impact the performance of our proposed method. Unfortunately, we do not have a conclusive answer as to when our method works better. However, we would like to share some thoughts about it.", "First, our proposed method works better on two of the student response courses (Eng and CS2016), but not on the other two (Stat2015 and Stat2016).
An important factor we ignored is that the students from the other two courses are not native English speakers, resulting in significantly shorter responses (4.3 INLINEFORM0 6.0 INLINEFORM1 8.8, 9.1, INLINEFORM2 , Table TABREF42 , the row with id=11). With shorter sentences, there is less context for the low-rank approximation to leverage.", "Second, our proposed method works better on movie and peer reviews, but not on camera reviews. As pointed out by Xiong xiong2015helpfulness, both movie reviews and peer reviews are potentially more complicated than the camera reviews, as the review content consists of both the reviewer's evaluations of the subject (e.g., a movie or paper) and the reviewer's references to the subject, where the subject itself is full of content (e.g., movie plot, papers). In contrast, such references in product reviews are usually mentions of product components or properties, which have limited variation. This characteristic makes review summarization more challenging in these two domains." ], [ "We made the first effort to summarize student feedback using an Integer Linear Programming framework with a low-rank matrix approximation, and applied it to different types of data sets including news articles, product reviews, and peer reviews. Our approach allows sentences to share co-occurrence statistics and alleviates the sparsity issue. Our experiments showed that the proposed approach performs better than a range of baselines on the student response data sets Eng and CS2016 in terms of ROUGE scores, but not on the other courses.", "ROUGE is often adopted in research papers to evaluate the quality of summarization because it is fast and correlates well with human evaluation BIBREF72 , BIBREF82 . However, ROUGE has also been criticized for not thoroughly capturing the semantic similarity between system and reference summaries. Different alternatives have been proposed to enhance ROUGE. For example, Graham rankel2016statistical proposed to use content-oriented features in conjunction with linguistic features. Similarly, Cohan and Goharian COHAN16.1144 proposed to use content relevance. At the same time, many researchers supplement ROUGE with a manual evaluation. This is why we conduct evaluations using both ROUGE and human evaluation in this work.", "However, we found that a system with better ROUGE scores is not necessarily preferred more by humans (§ SECREF28 ). For example, ILP is preferred more on all three review data sets even though it obtained lower ROUGE scores than the other systems. This coincides with the fact that ILP generated shorter summaries, in terms of the number of sentences, than the other two systems (Table TABREF39 ).", "We also investigated a variety of attributes that might impact the performance on a range of data sets. Unfortunately, we did not reach a conclusive answer as to when our method works better.", "In the future, we would like to conduct a large-scale intrinsic evaluation to examine whether the low-rank matrix approximation captures similar bigrams, and to investigate more attributes, such as new metrics for diversity. We would also like to explore opportunities for combining a vector sentence representation learned by a neural network with the ILP framework." ] ], "section_name": [ "Introduction", "Related Work", "ILP Formulation", "Our Approach", "Datasets", "Experiments", "Intrinsic evaluation", "Extrinsic evaluation", "Analysis of Influential Factors", "Conclusion" ] }
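The "Our Approach" section above describes imputing the sparse concept-sentence co-occurrence matrix with a trace-norm-regularized least-squares objective, optimized by proximal gradient descent with a singular value soft-thresholding step. The following is a minimal numpy sketch of that update, not the authors' implementation: the regularization weight, step size, iteration count, the choice of which entries are treated as observed, and the final clipping to [0, 1] are illustrative assumptions.

```python
import numpy as np

def soft_threshold_svd(M, tau):
    # Proximal operator of tau * (trace norm): shrink the singular values of M by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def impute_cooccurrence(A, observed, lam=1.0, step=1.0, n_iter=200):
    # A: binary concept-sentence co-occurrence matrix (numpy array).
    # observed: boolean mask of entries treated as observed; which entries
    # count as observed is an assumption made for this sketch.
    X = np.zeros_like(A, dtype=float)
    for _ in range(n_iter):
        grad = np.where(observed, X - A, 0.0)               # gradient of the squared loss on observed entries
        X = soft_threshold_svd(X - step * grad, step * lam)  # proximal (soft-thresholded SVD) step
    # The text above states the imputed values lie in [0, 1] with negatives truncated.
    return np.clip(X, 0.0, 1.0)
```

A unit step size corresponds to a Lipschitz constant of 1 for the squared loss restricted to the observed entries; the exact step size and stopping criterion used in the paper are not reproduced here.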
{ "answers": [ { "annotation_id": [ "8e9ae652ae395c711d3c51d85471832268731ff6" ], "answer": [ { "evidence": [ "In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts\" will be allowed to partially contain “bike elements\" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings." ], "extractive_spans": [ "low-rank approximation of the co-occurrence matrix" ], "free_form_answer": "", "highlighted_evidence": [ "In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3615ad4763df9e2d70ee3f92f6f1a6865689bbb8" ], "answer": [ { "evidence": [ "The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summaries for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate as an example, for Stat2015, there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. 
We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT will be done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited and on average 27.3 of HITs were completed by one turker. The distribution of the human preference scores is shown in Fig. FIGREF34 ." ], "extractive_spans": [], "free_form_answer": "One model per topic.", "highlighted_evidence": [ "Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "e0a1b46f34149f50c21073c5b92c47d9b0cd324f" ], "answer": [ { "evidence": [ "In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation.", "The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. Therefore, H1.a holds. For H1.b, we only observe INLINEFORM1 significantly on Stat2016 and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. First, the gold-standard data set is still small in the sense that only a limited portion of bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated by different colors are not necessarily unrelated is too strong. For example, “hypothesis testing\" and “H1 and Ho conditions\" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occurs in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts\" in Table TABREF25 is associated with “hard to\"." ], "extractive_spans": [], "free_form_answer": "They evaluate quantitatively.", "highlighted_evidence": [ "In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. 
In the following experiments, summary length is set to be the average number of words in human summaries or less.", "An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3cdcf814d3746f4a4a065df89cea5a768a46d557" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What do they constrain using integer linear programming?", "Do they build one model per topic or on all topics?", "Do they quantitavely or qualitatively evalute the output of their low-rank approximation to verify the grouping of lexical items?", "Do they evaluate their framework on content of low lexical variety?" ], "question_id": [ "5758ebff49807a51d080b0ce10ba3f86dcf71925", "e84ba95c9a188fda4563f45e53fbc8728d8b5dab", "caf9819be516d2c5a7bfafc80882b07517752dfa", "b1e90a546dc92e96b657fff5dad8e89f4ac6ed5e" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. Selected summarization data sets. Publicly available data sets are marked with an asterisk (*). The statistics involve the number of summarization tasks (Tasks), average number of documents per task (Docs/task), average word count per task (WC/task), average word count per sentence (WC/sen), and average number of words in human summaries (Length).", "Table 2. Example prompt, student responses, and one human summary. ‘S1’–‘S10’ are student IDs. The summary and phrase highlights are manually created by annotators. Phrases that bear the same color belong to the same issue. The superscripts of the phrase highlights are imposed by the authors to differentiate colors when", "Table 3. Example movie reviews.", "Table 4. Example sentences from news.", "Table 5. Associated bigrams that do not appear in the sentence, but after Matrix Completion, yield a decent correlation (cell value greater than 0.9) with the corresponding sentence.", "Table 6. A gold-standard data set was extracted from three student response corpora that have phrase-highlighting annotation. Statistics include: the number of bigrams, the number of pairs of similar bigrams and pairs of different bigrams, the number of tuples 〈i, j+, j−〉, and the number of 〈i+, i−, j〉. i is a bigram, j+ is a sentence with a bigram similar to i, and j− is a sentence with a bigram different from i. j is a sentence, i+ is a bigram that is similar to a bigram in j, and i− is a bigram that is different from any bigram in j.", "Table 7. Hypothesise testing: whether the matrix completion (MC) helps to capture similar concepts. ∗ means p < 0.05 using a two-tailed paired t-test.", "Table 8. Summarization results evaluated by ROUGE and human judges. Best results are shown in bold for each data set. ∗ indicates that the performance difference with ILP+MC is statistically significant (p < 0.05) using a two-tailed paired t-test. Underline means that ILP+MC is better than ILP.", "Table 9. A comparison with official participants in DUC 2004, including 8 human annotators (1-8) and the top-5 offical participants (A-E). ‘-’ means a metric is not available.", "Fig. 1. An example HIT from Stat2015, ‘System A’ is ILP+MC and ‘System B’ is SumBasic.", "Fig. 2. Distribution of human preference scores", "Table 10. Inter-annotator agreement measured by the percentage of individual judgements agreeing with the majority votes. ∗ means the human preference to the two systems are significantly different and the system in parenthesis is the winner. Underline means that it is lower than random choices (45.7%).", "Table 11. Number of sentences in the output summaries. ∗ means it is significantly different to ILP+MC (p < 0.05) using a two-tailed paired t-test.", "Table 13. Attributes description, extracted from the input and the human reference summaries.", "Table 14. Attributes extracted from the input and the human reference summaries. The numbers in the row of M ∗ N are divided by 106. The description of each attribute is shown in Table 13.", "Table 15. ROUGE scores on synthesized corpora. Bold scores indicate our approach ILP+MC is better than ILP. + and − mean a score is significantly better and worse respectively (p < 0.05) using a two-tailed paired t-test.", "Table 16. Summarization results without removing low-frequency bigrams. That is, all bigrams are used in the matrix approximation process. Compared to Table 8, by using the cutoff technique, both ILP and ILP+MC get better." 
], "file": [ "9-Table1-1.png", "10-Table2-1.png", "11-Table3-1.png", "12-Table4-1.png", "13-Table5-1.png", "15-Table6-1.png", "15-Table7-1.png", "16-Table8-1.png", "17-Table9-1.png", "19-Figure1-1.png", "20-Figure2-1.png", "21-Table10-1.png", "22-Table11-1.png", "23-Table13-1.png", "24-Table14-1.png", "27-Table15-1.png", "34-Table16-1.png" ] }
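To make the concept-coverage ILP described in the "ILP Formulation" section above concrete, here is a small sketch using the PuLP modelling library with its bundled CBC solver. This is an assumption for illustration only; the record does not say which solver the authors used, and all names below are hypothetical.

```python
import pulp

def ilp_summarize(sent_lengths, concept_weights, occurrence, budget):
    # occurrence[i][j] = 1 if concept i appears in sentence j (the co-occurrence matrix in the text).
    n_c, n_s = len(concept_weights), len(sent_lengths)
    prob = pulp.LpProblem("concept_coverage", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(n_s)]   # sentence j selected
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n_c)]   # concept i covered
    prob += pulp.lpSum(concept_weights[i] * z[i] for i in range(n_c))  # maximise covered concept weight
    for i in range(n_c):
        carriers = [x[j] for j in range(n_s) if occurrence[i][j]]
        prob += z[i] <= pulp.lpSum(carriers)   # covered only if some carrying sentence is selected
        for xj in carriers:
            prob += xj <= z[i]                 # a selected sentence covers every concept it carries
    prob += pulp.lpSum(sent_lengths[j] * x[j] for j in range(n_s)) <= budget  # summary length limit
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_s) if x[j].value() > 0.5]
```

The ILP+MC variant discussed above would feed an imputed, continuous co-occurrence matrix into this formulation; since the record does not reproduce the exact modified constraints, that step is omitted here.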
[ "Do they build one model per topic or on all topics?", "Do they quantitavely or qualitatively evalute the output of their low-rank approximation to verify the grouping of lexical items?" ]
[ [ "1807.09671-Extrinsic evaluation-9" ], [ "1807.09671-Experiments-0", "1807.09671-Intrinsic evaluation-6" ] ]
[ "One model per topic.", "They evaluate quantitatively." ]
455
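Since the extrinsic evaluation in this record reports ROUGE scores, the toy function below illustrates what ROUGE-1 recall measures: clipped unigram overlap between a system summary and a reference. It uses naive whitespace tokenisation and is only a sketch, not the official ROUGE toolkit used in the experiments.

```python
from collections import Counter

def rouge_1_recall(system_summary: str, reference_summary: str) -> float:
    # Fraction of reference unigrams that also appear in the system summary, with clipped
    # counts; real ROUGE additionally supports stemming, stopword removal and other options.
    sys_counts = Counter(system_summary.lower().split())
    ref_counts = Counter(reference_summary.lower().split())
    overlap = sum(min(count, sys_counts[token]) for token, count in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)
```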
1611.00514
The Intelligent Voice 2016 Speaker Recognition System
This paper presents the Intelligent Voice (IV) system submitted to the NIST 2016 Speaker Recognition Evaluation (SRE). The primary emphasis of SRE this year was on developing speaker recognition technology which is robust for novel languages that are much more heterogeneous than those used in the current state of the art, using significantly less training data that does not contain meta-data from those languages. The system is based on the state-of-the-art i-vector/PLDA framework, developed under the fixed training condition, and the results are reported on the protocol defined on the development set of the challenge.
{ "paragraphs": [ [ "Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task, as in previous years, is to perform speaker detection with a focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation in languages, recording environments, and test segment durations addressed in this evaluation influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation.", "Over recent years, the i-vector representation of speech segments has been widely used by state-of-the-art speaker recognition systems BIBREF0 . The speaker recognition technology based on i-vectors currently dominates the research field due to its performance, low computational cost and the compatibility of i-vectors with machine learning techniques. This dominance is reflected by the recent NIST i-vector machine learning challenge BIBREF1 , which was designed to find the most promising algorithmic approaches to speaker recognition specifically on the basis of i-vectors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . The outstanding ability of DNNs for frame alignment, which has achieved remarkable performance in text-independent speaker recognition on English data BIBREF6 , BIBREF7 , failed to provide recognition performance even comparable to the traditional GMM. Therefore, we concentrated on the cepstral-based GMM/i-vector system.", "We outline in this paper the Intelligent Voice system, the techniques used, and the results obtained on the SRE 2016 development set, which is designed to mirror the evaluation condition, as well as a timing report. Section SECREF2 describes the data used for system training. The front-end and back-end processing of the system are presented in Sections SECREF3 and SECREF4 respectively. In Section SECREF5 , we describe the experimental evaluation of the system on the SRE 2016 development set. Finally, we present a timing analysis of the system in Section SECREF6 ." ], [ "The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the Linguistic Data Consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phases I, II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in training in the following sections." ], [ "In this section we provide a description of the main steps in the front-end processing of our speaker recognition system, including speech activity detection, acoustic feature extraction and i-vector feature extraction." ], [ "The first stage of any speaker recognition system is to detect the speech content in an audio signal. An accurate speech activity detector (SAD) can improve speaker recognition performance.
Several techniques have been proposed for SAD, including unsupervised methods based on thresholding the signal energy, and supervised methods that train a speech/non-speech classifier such as support vector machines (SVMs) BIBREF8 and Gaussian mixture models (GMMs) BIBREF9 . Hidden Markov models (HMMs) BIBREF10 have also been successful. Recently, it has been shown that DNN systems achieve impressive improvements in performance, especially at low signal-to-noise ratios (SNRs) BIBREF11 . In our work we have utilized a two-class DNN-HMM classifier to perform this task. The DNN-HMM hybrid configuration, with cross-entropy as the objective function, has been trained with the back-propagation algorithm. The softmax layer produces posterior probabilities for speech and non-speech, which are then converted into log-likelihoods. Using 2-state HMMs corresponding to speech and non-speech, frame-wise decisions are made by Viterbi decoding. As input to the network, we fed 40-dimensional filter-bank features along with a context of 7 frames on each side. The network has 6 hidden layers with 512 units each. The architecture of our DNN-HMM SAD is shown in Figure FIGREF3 . Approximately 100 hours of speech data from the Switchboard telephony data, with word alignments as ground-truth, were used to train our SAD. The DNN training is performed on an NVIDIA TITAN X GPU, using the Kaldi software BIBREF12 . Evaluated on 50 hours of telephone speech data from the same database, our DNN-HMM SAD indicated a frame-level misclassification (speech/non-speech) rate of 5.9%, whereas an energy-based SAD did not perform better than 20%." ], [ "For acoustic features we have experimented with different configurations of cepstral features. We have used 39-dimensional PLP features and 60-dimensional MFCC features (including their first and second order derivatives) as acoustic features. Moreover, our experiments indicated that the combination of these two feature sets performs particularly well in score fusion. Both PLP and MFCC are extracted at an 8 kHz sampling frequency using Kaldi BIBREF12 with 25 and 20 ms frame lengths, respectively, and a 10 ms overlap (other configurations are the same as the Kaldi defaults). For each utterance, the features are centered using a short-term (3s window) cepstral mean and variance normalization (ST-CMVN). Finally, we employed our DNN-HMM speech activity detector (SAD) to drop non-speech frames." ], [ "Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as the universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named the total variability matrix BIBREF0 . We trained on each acoustic feature a full covariance, gender-independent UBM model with 2048 Gaussians followed by a 600-dimensional i-vector extractor to establish our MFCC- and PLP-based i-vector systems. The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 .", "It has been shown that successive acoustic observation vectors tend to be highly correlated. This may be problematic for maximum a posteriori (MAP) estimation of i-vectors.
To address this issue, scaling the zeroth- and first-order Baum-Welch statistics before presenting them to the i-vector extractor has been proposed. It turns out that a scale factor of 0.33 gives a slight edge, resulting in a better decision cost function BIBREF13 . This scaling has been applied in training the i-vector extractor as well as in testing." ], [ "This section describes the steps performed in the back-end processing of our speaker recognition system." ], [ "Nearest-neighbor discriminant analysis (NDA) is a nonparametric discriminant analysis technique which was proposed in BIBREF14 and recently used in speaker recognition BIBREF15 . The nonparametric within- and between-class scatter matrices INLINEFORM0 and INLINEFORM1 , respectively, are computed based on INLINEFORM2 nearest neighbor sample information. The NDA transform is then formed using eigenvectors of INLINEFORM3 . It has been shown that as the number of nearest neighbors INLINEFORM4 approaches the number of samples in each class, the NDA essentially becomes the LDA projection. Based on the findings in BIBREF15 , NDA outperforms LDA due to its ability to capture the local structure and boundary information within and across different speakers. We applied a INLINEFORM5 NDA projection matrix, computed using the 10 nearest samples, to the centered i-vectors. The resulting dimensionality-reduced i-vectors are then whitened using both the training data and the unlabelled development set." ], [ "The enrolment condition of the development set is supposed to provide at least 60 seconds of speech data for each target speaker. Nevertheless, our SAD indicates that the speech content is as low as 26 seconds in some cases. The test segment durations, which range from 9 to 60 seconds of speech material, can result in poor performance for shorter segments. As indicated in Figure FIGREF8 , more than one third of the test segments have a speech duration of less than 20 seconds. We have addressed this issue by proposing a short-duration variability compensation method. The proposed method works by first extracting, from each audio segment in the unlabelled development set, a partial excerpt of 10 seconds of speech material with a randomly selected starting point (Figure FIGREF9 ). Each audio file in the unlabelled development set, together with its extracted excerpt, results in two 400-dimensional i-vectors, one of which is based on at most 10 seconds of speech material. Considering each pair as one class, we computed a INLINEFORM0 LDA projection matrix to remove directions attributed to duration variability. Moreover, the projected i-vectors are also subjected to a within-class covariance normalization (WCCN) using the same class labels." ], [ "Language-source normalization is an effective technique for reducing language dependency in the state-of-the-art i-vector/PLDA speaker recognition system BIBREF16 . It can be implemented by extending SN-LDA BIBREF17 in order to mitigate variations that separate languages. This can be accomplished by using the language label to identify different sources during training.
Language Normalized-LDA (LN-LDA) utilizes a language-normalized within-speaker scatter matrix INLINEFORM0 , which is estimated as the variability not captured by the between-speaker scatter matrix, DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are the total scatter and normalized between-speaker scatter matrices respectively, and are formulated as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the total number of i-vectors and DISPLAYFORM0 ", "where INLINEFORM0 is the number of languages in the training set, INLINEFORM1 is the number of speakers in language INLINEFORM2 , INLINEFORM3 is the mean of INLINEFORM4 i-vectors from speaker INLINEFORM5 and language INLINEFORM6 , and finally INLINEFORM7 is the mean of all i-vectors in language INLINEFORM8 . We applied a INLINEFORM9 SN-LDA projection matrix to reduce the i-vector dimensions down to 300." ], [ "Probabilistic Linear Discriminant Analysis (PLDA) provides a powerful mechanism to distinguish between-speaker variability, which characterizes speaker information, from all other sources of undesired variability that characterize distortions. Since i-vectors are assumed to be generated by some generative model, we can break them down into statistically independent speaker and session components with Gaussian distributions BIBREF18 , BIBREF19 . Although it has been shown that their distribution follows a Student's INLINEFORM0 rather than a Gaussian distribution BIBREF19 , length-normalizing the entire set of i-vectors as a pre-processing step can approximately Gaussianize their distributions BIBREF18 and as a result improve the performance of Gaussian PLDA to that of heavy-tailed PLDA BIBREF19 . A standard Gaussian PLDA assumes that an i-vector INLINEFORM1 is modelled according to DISPLAYFORM0 ", "where INLINEFORM0 is the mean of the i-vectors, the columns of the matrix INLINEFORM1 contain the basis for the between-speaker subspace, the latent identity variable INLINEFORM2 denotes the speaker factor that represents the identity of the speaker, and the residual INLINEFORM3 , which is normally distributed with zero mean and full covariance matrix INLINEFORM4 , represents within-speaker variability.", "For each acoustic feature we have trained two PLDA models. The first, out-domain PLDA ( INLINEFORM0 , INLINEFORM1 ), was trained using the training set presented in Table TABREF1 , and the second, in-domain PLDA ( INLINEFORM2 , INLINEFORM3 ), was trained using the unlabelled development set. Our efforts to cluster the development set (e.g., using the out-domain PLDA) were not very successful, as it appears that almost all of the segments are uttered by different speakers. Therefore, each i-vector was considered to be uttered by a distinct speaker. We also set the number of speaker factors to 200." ], [ "Domain adaptation has gained considerable attention with the aim of compensating for the cross-speech-source variability of in-domain and out-of-domain data. The framework presented in BIBREF20 for unsupervised adaptation of out-domain PLDA parameters resulted in better performance for in-domain data. Using the in-domain and out-domain PLDA models trained in Section SECREF14 , we interpolated their parameters as follows: DISPLAYFORM0 ", "We chose INLINEFORM0 for our submission." ], [ "For the one-segment enrolment condition, the speaker model is the length-normalized i-vector of that segment; for the three-segment enrolment condition, we simply use the length-normalized mean of the length-normalized i-vectors as the speaker model.
Each speaker model is tested against each test segment as specified in the trial list. For each pair of trial i-vectors INLINEFORM0 and INLINEFORM1 , the PLDA score is computed as DISPLAYFORM0 ", "in which DISPLAYFORM0 DISPLAYFORM1 ", "and INLINEFORM0 and INLINEFORM1 . It has been shown, and confirmed in our experiments, that score normalization can have a great impact on the performance of the recognition system. We used the symmetric s-norm proposed in BIBREF19 , which normalizes the score INLINEFORM2 of the pair INLINEFORM3 using the formula DISPLAYFORM0 ", "where the means INLINEFORM0 and standard deviations INLINEFORM1 are computed by matching INLINEFORM2 and INLINEFORM3 against the unlabelled set as the impostor speakers, respectively. (A code sketch of this normalization follows the full text of this record.)" ], [ "It has been shown that there is a dependency between the value of the INLINEFORM0 threshold and the duration of both the enrolment and test segments. Applying a quality measure function (QMF) BIBREF3 enabled us to compensate for the shift in the INLINEFORM1 threshold due to differences in speech duration. We conducted some experiments to estimate the dependency of the INLINEFORM2 threshold shift on the test segment duration and used the following QMF for the PLDA verification scores: DISPLAYFORM0 ", "where INLINEFORM0 is the duration of the test segment in seconds." ], [ "In the literature, the performance of speaker recognition is usually reported in terms of the calibration-insensitive equal error rate (EER) or the minimum decision cost function ( INLINEFORM0 ). However, in real applications of speaker recognition there is a need to present recognition results in terms of calibrated log-likelihood ratios. We have utilized the BOSARIS Toolkit BIBREF21 for the calibration of scores. INLINEFORM1 provides an ideal reference value for judging calibration. If INLINEFORM2 is minimized, then the system can be said to be well calibrated.", "The choice of the target probability ( INLINEFORM0 ) had a great impact on the performance of the calibration. However, we set INLINEFORM1 for our primary submission, which performed the best on the development set. For our secondary submission, INLINEFORM2 was used." ], [ "In this section we present the results obtained on the protocol provided by NIST on the development set, which is supposed to mirror that of the evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the results obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of the MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components, we have defined different scenarios. In scenario A, we analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP; however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . The results indicate superior performance using this technique. In scenario C, we investigated the effect of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation for MFCC as well as in fusion; however, PLP seems not to be adversely affected. The effect of using the QMF is also investigated in scenario D.
Finally in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in either LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set." ], [ "This section reports on the CPU execution time (single threaded), and the amount of memory used to process a single trial, which includes the time for creating models from the enrolment data and the time needed for processing the test segments. The analysis was performed on an Intel(R) Xeon(R) CPU E5-2670 2.60GHz. The results are shown in Table TABREF27 . We used the time command in Unix to report these results. The user time is the actual CPU time used in executing the process (single thread). The real time is the wall clock time (the elapsed time including time slices used by other processes and the time the process spends blocked). The system time is also the amount of CPU time spent in the kernel within the process. We have also reported the memory allocated for each stage of execution. The most computationally intensive stage is the extraction of i-vectors (both MFCC- and PLP-based i-vectors), which also depends on the duration of the segments. For enrolment, we have reported the time required to extract a model from a segment with a duration of 140 seconds and speech duration of 60 seconds. The time and memory required for front-end processing are negligible compared to the i-vector extraction stage, since they only include matrix operations. The time required for our SAD is also reported which increases linearly with the duration of segment." ], [ "We have presented the Intelligent Voice speaker recognition system used for the NIST 2016 speaker recognition evaluation. Our system is based on a score fusion of MFCC- and PLP-based i-vector/PLDA systems. We have described the main components of the system including, acoustic feature extraction, speech activity detection, i-vector extraction as front-end processing, and language normalization, short-duration compensation, channel compensation and domain adaptation as back-end processing. For our future work, we intend to use the ALISP segmentation technique BIBREF22 in order to extract meaningful acoustic units so as to train supervised GMM or DNN models." ] ], "section_name": [ "Introduction", "Training Condition", "Front-End Processing", "Speech Activity Detection", "Acoustic Features", "i-Vector Features", "Back-End Processing", "Nearest-neighbor Discriminant Analysis (NDA)", "Short-Duration Variability Compensation", "Language Normalization", "PLDA", "Domain Adaptation", "Score Computation and Normalization", "Quality Measure Function", "Calibration", "Results and Discussion", "Time Analysis", "Conclusions and Perspectives" ] }
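The back-end described above combines PLDA parameter interpolation for domain adaptation and symmetric s-norm score normalization against the unlabelled development set. The snippet below is a minimal sketch of these two steps, not the authors' implementation: the function names are ours, the interpolation weight must be supplied by the caller (the value chosen for the submission is not reproduced here), and the cohort scores in the toy usage are random stand-ins.

```python
# Minimal sketch (not the authors' code) of two back-end operations described above:
# (1) interpolating in-domain and out-of-domain PLDA parameters for domain adaptation,
# (2) symmetric s-norm score normalization against an unlabelled impostor cohort.
import numpy as np

def interpolate_plda(phi_out, sigma_out, phi_in, sigma_in, alpha):
    """Convex combination of the between-speaker matrix (phi) and the residual
    covariance (sigma) of the two PLDA models; alpha is a tuning weight."""
    phi = alpha * phi_in + (1.0 - alpha) * phi_out
    sigma = alpha * sigma_in + (1.0 - alpha) * sigma_out
    return phi, sigma

def symmetric_s_norm(raw_score, enrol_cohort_scores, test_cohort_scores):
    """Average of the raw score z-normalized against the enrolment-side and
    test-side cohort score distributions (cohort = unlabelled set as impostors)."""
    mu_e, sd_e = np.mean(enrol_cohort_scores), np.std(enrol_cohort_scores)
    mu_t, sd_t = np.mean(test_cohort_scores), np.std(test_cohort_scores)
    return 0.5 * ((raw_score - mu_e) / sd_e + (raw_score - mu_t) / sd_t)

# Toy usage with random stand-ins for the cohort score lists.
rng = np.random.default_rng(0)
print(symmetric_s_norm(12.3, rng.normal(0.0, 3.0, 500), rng.normal(1.0, 2.0, 500)))
```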
{ "answers": [ { "annotation_id": [ "c098177da67025e6442b6e3416945868669f695d" ], "answer": [ { "evidence": [ "Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named total variability matrix BIBREF0 . We trained on each acoustic feature a full covariance, gender-independent UBM model with 2048 Gaussians followed by a 600-dimensional i-vector extractor to establish our MFCC- and PLP-based i-vector systems. The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "36367f86934e7042c498a208cbb8f4a2a91f6a22" ], "answer": [ { "evidence": [ "In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP, however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation in MFCC as well as fusion, however, PLP seems not to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in either LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.", "FLOAT SELECTED: Table 2. 
Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016." ], "extractive_spans": [], "free_form_answer": "EER 16.04, Cmindet 0.6012, Cdet 0.6107", "highlighted_evidence": [ "In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 .", "FLOAT SELECTED: Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "f20c73c9e64739fd4d3748b59f1581260107da5f" ], "answer": [ { "evidence": [ "The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the linguistic data consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phase I,II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections." ], "extractive_spans": [ "Cebuano and Mandarin", "Tagalog and Cantonese" ], "free_form_answer": "", "highlighted_evidence": [ "We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do they single out a validation set from the fixed SRE training set?", "How well does their system perform on the development set of SRE?", "Which are the novel languages on which SRE placed emphasis on?" ], "question_id": [ "21a96b328b43a568f9ba74cbc6d4689dbc4a3d7b", "30803eefd7cdeb721f47c9ca72a5b1d750b8e03b", "442f8da2c988530e62e4d1d52c6ec913e3ec5bf1" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. The description of the data used for training the speaker recognition system.", "Fig. 1. The architecture of our DNN-HMM speech activity detection.", "Fig. 2. The duration of test segments in the development set after dropping non-speech frames.", "Fig. 3. Partial excerpt of 10 second speech duration from an audio speech file.", "Table 2. Performance comparison of the Intelligent Voice speaker recognition system with various analysis on the development protocol of NIST SRE 2016.", "Table 3. CPU execution time and the amount of memory required to process a single trial." ], "file": [ "1-Table1-1.png", "2-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "5-Table2-1.png", "6-Table3-1.png" ] }
[ "How well does their system perform on the development set of SRE?" ]
[ [ "1611.00514-5-Table2-1.png", "1611.00514-Results and Discussion-0" ] ]
[ "EER 16.04, Cmindet 0.6012, Cdet 0.6107" ]
458
1901.00570
Event detection in Twitter: A keyword volume approach
Event detection using social media streams needs a set of informative features with strong signals that need minimal preprocessing and are highly associated with events of interest. Identifying these informative features as keywords from Twitter is challenging, as people use informal language to express their thoughts and feelings. This informality includes acronyms, misspelled words, synonyms, transliteration and ambiguous terms. In this paper, we propose an efficient method to select the keywords frequently used in Twitter that are mostly associated with events of interest such as protests. The volume of these keywords is tracked in real time to identify the events of interest in a binary classification scheme. We use keywords within word-pairs to capture the context. The proposed method is to binarize vectors of daily counts for each word-pair by applying a spike detection temporal filter, then use the Jaccard metric to measure the similarity of the binary vector for each word-pair with the binary vector describing event occurrence. The top n word-pairs are used as features to classify any day to be an event or non-event day. The selected features are tested using multiple classifiers such as Naive Bayes, SVM, Logistic Regression, KNN and decision trees. They all produced AUC ROC scores up to 0.91 and F1 scores up to 0.79. The experiment is performed using the English language in multiple cities such as Melbourne, Sydney and Brisbane as well as the Indonesian language in Jakarta. The two experiments, comprising different languages and locations, yielded similar results.
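As a rough illustration of the selection procedure summarized in this abstract, the sketch below binarizes each word-pair's daily-count series with a simple spike filter and ranks word-pairs by Jaccard similarity against the binary event-day vector. It is a minimal sketch under stated assumptions: the one-day differencing filter and its threshold are stand-ins for the temporal filter defined later in the paper, and the function names are ours.

```python
# Minimal sketch of keyword-volume feature selection: spike-binarize each word-pair's
# daily counts, then rank word-pairs by Jaccard similarity with the event-day vector.
import numpy as np

def spike_binarize(counts, threshold=3):
    """Mark day t as a spike if the count rises by at least `threshold` relative to
    the previous day (an assumed stand-in for the paper's temporal filter)."""
    counts = np.asarray(counts, dtype=float)
    diff = np.diff(counts, prepend=counts[0])
    return (diff >= threshold).astype(int)

def jaccard(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def top_word_pairs(daily_counts, event_days, n=100):
    """daily_counts: dict {word_pair: 1-D array of counts per day}; returns the n
    word-pairs whose spike pattern best matches the binary event-day vector."""
    scores = {wp: jaccard(spike_binarize(c), event_days)
              for wp, c in daily_counts.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n]
```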
{ "paragraphs": [ [ "Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as soon as they occur and are reported via some Twitter user. The event detection process requires to know the keywords associated with each event and to assess the minimal count of each word to decide confidently that an event has occurred. In this research, we propose a novel method of spike matching to identify keywords, and use probabilistic classification to assess the probability of having an event given the volume of each word.", "Event detection and prediction from social networks have been studied frequently in recent years. Most of the predictive frameworks use textual content such as likes, shares, and retweets, as features. The text is used as features either by tracking the temporal patterns of keywords, clustering words into topics, or by evaluating sentiment scores and polarity. The main challenge in keyword-based models is to determine which words to use in the first place, especially as people use words in a non-standard way, particularly on Twitter.", "In this research, we aim for detecting large events as soon as they happen with near-live sensitivity. For example, When spontaneous protests occur just after recent news such as increasing taxes or decreasing budget, we need to have indicators to raise the flag of a happening protest. Identifying these indicators requires to select a set of words that are mostly associated with the events of interest such as protests. We then track the volume of these words and evaluate the probability of an event occurring given the current volume of each of the tracked features. The main challenge is to find this set of features that allow such probabilistic classification.", "Using text as features in Twitter is challenging because of the informal nature of the tweets, the limited length of the tweet, platform-specific language, and multilingual nature of Twitter BIBREF0 , BIBREF1 , BIBREF2 . The main challenges for text analysis in Twitter are listed below:", "We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize the context of the word ('Messi','strike' ) is different than ('labour','strike').", "According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it values the spikes matching events and penalizes spikes with no event and penalizes events without spikes. Separate words can be noisy due to the misuse of the term by people, especially in big data environments. So, we rather used the word-pairs as textual features in order to capture the context of the word. 
For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”", "In this paper, we propose a method to find the best word-pairs to represent the events of interest. These word-pairs can be used for time series analysis to predict future events as indicated in Figure FIGREF1 . They can also be used as seeds for topic modelling, or to find related posts and word-pairs using dynamic query expansion. The proposed framework uses a temporal filter to identify the spikes within the word-pair signal and to binarize the word-pair time series vector BIBREF3 . The binary vector of each word-pair is compared to the protest-days vector using the Jaccard similarity index BIBREF4 , BIBREF5 , where the word-pairs with the highest similarity scores are those most associated with protest days. This feature selection method is built upon the assumption that people discuss an event on the day of that event more than on any day before or after the event. This implies that word-pairs related to the event will form a spike on this specific day. Some of the spiking word-pairs are related to the nature of the event itself, such as “taxi protest” or “fair education”. These word-pairs will appear only once or twice along the time frame. Meanwhile, more generic word-pairs such as “human rights” or “labour strike” will spike more frequently on the days of events regardless of the nature of the protest.", "To test our method, we developed two experiments using all the tweets in Melbourne and Sydney over a period of 640 days. The total number of tweets exceeded 4 million tweets per day, with a total word-pair count of 12 million different word-pairs per day, forming 6 billion word-pairs over the entire timeframe. The selected word-pairs from each city are used as features to classify whether there will be an event or not on a specific day in that city. We classified events from the extracted word-pairs using 9 classifiers including Naive Bayes, Decision Trees, KNN, SVM, and logistic regression.", "In Section 2, we describe the event detection methods. Section 3 states the known statistical methods used for data association and feature selection. Section 4 describes the proposed feature selection method. Section 5 describes model training and prediction. Section 6 describes the experiment design, the data and the results. Section 7 summarizes the paper, discusses the conclusions and explains future work." ], [ "Analyzing social networks for event detection is approached from multiple perspectives depending on the research objective. This can be predicting election results, the winner of a contest, or people's reaction to a government decision through protest. The main perspectives to analyze the social networks are (1) content analysis, where the textual content of each post is analyzed using natural language processing to identify the topic or the sentiment of the authors; (2) network structure analysis, where the relations between users are described in a tree structure for follower-followee patterns, or in a graph structure for friendship and interaction patterns, and these patterns can be used to infer the political preference of people prior to elections; and (3) behavioural analysis of each user, including sentiment, responses, likes, retweets and location, to identify responses toward specific events. This might be useful to identify users with terrorist intentions. 
In this section, we will focus on textual content-based models, where text analysis and understanding can be achieved using keywords, topic modelling or sentiment analysis." ], [ "Keyword-based approaches focus on sequence analysis of the time series for each keyword. They also consider different forms for each keyword, including n-gram, skip-gram, and word-pairs BIBREF6 . The keyword-based approaches use the concept of the distributional semantics to group semantically-related words as synonyms to be used as a single feature BIBREF7 . In this approach, keywords are usually associated with events by correlation, entropy or distance metrics. Also, Hossny et al. proposed using SVD with K-Means to strengthen keyword signals, by grouping words having similar temporal patterns, then mapping them into one central word that has minimum distance to the other members of the cluster BIBREF8 .", "Sayyadi et al. used co-occurring keywords in documents such as news articles to build a network of keywords. This network is used as a graph to feed a community detection algorithm in order to identify and classify events BIBREF9 . Takeshi et al. created a probabilistic spatio-temporal model to identify natural disasters events such as earthquakes and typhoons using multiple tweet-based features such as words counts per tweet, event-related keywords, and tweet context. They considered each Twitter user as a social sensor and applied both of the Kalman filter and particle filter for location estimation. This model could detect 96% of Japanese earthquakes BIBREF10 . Zhou et al. developed a named entity recognition model to find location names within tweets and use them as keyword-features for event detection, then estimated the impact of the detected events qualitatively BIBREF11 .", "Weng et al. introduced “Event Detection by Clustering of Wavelet-based Signals” (EDCow). This model used wavelets to analyze the frequency of word signals, then calculated the autocorrelations of each word signal in order to filter outlier words. The remaining words were clustered using a modularity-based graph partitioning technique to form events BIBREF12 . Ning et al. proposed a model to identify evidence-based precursors and forecasts of future events. They used as a set of news articles to develop a nested multiple instance learning model to predict events across multiple countries. This model can identify the news articles that can be used as precursors for a protest BIBREF13 ." ], [ "Topic modelling approaches focus on clustering related words according to their meaning, and indexing them using some similarity metric such as cosine similarity or Euclidean distance. The most recognized techniques are (1) Latent Semantic Indexing (LSI), where the observation matrix is decomposed using singular value decomposition and the data are clustered using K-Means BIBREF7 ,(2) Latent Dirichlet Allocation (LDA), where the words are clustered using Gaussian mixture models (GMM) according to the likelihood of term co-occurrence within the same context BIBREF14 , (3) Word2Vec, which uses a very large corpus to compute continuous vector representations, where we can apply standard vector operations to map one vector to another BIBREF15 .", "Cheng et al. suggested using space-time scan statistics to detect events by looking for clusters within data across both time and space, regardless of the content of each individual tweet BIBREF16 . 
The clusters emerging during spatio-temporal relevant events are used as an indicator of a currently occurring event, as people tweet more often about event topics and news. Ritter et al. proposed a framework that uses the calendar date, cause and event type to describe any event in a way similar to the way Twitter users mention the important events. This framework used temporal resolution, POS tagging, an event tagger, and named entity recognition. Once features are extracted, the association between the combination of features and the events is measured in order to know what are the most important features and how significant the event will be BIBREF17 .", "Zhou et al. introduced a graphical model to capture the information in the social data including time, content, and location, calling it location-time constrained topic (LTT). They measure the similarity between the tweets using KL divergence to assess media content uncertainty. Then, they measure the similarity between users using a “longest common subsequence” (LCS) metric. They aggregate the two measurements by augmenting weights as a measure for message similarity. They used the similarity between streaming posts in a social network to detect social events BIBREF18 .", "Ifrim et al. presented another approach for topic detection that combines aggressive pre-processing of data with hierarchical clustering of tweets. The framework analyzes different factors affecting the quality of topic modelling results BIBREF19 , along with real-time data streams of live tweets to produce topic streams in close to real-time rate.", "Xing et al. presented the mutually generative Latent Dirichlet Allocation model (MGE-LDA) that uses hashtags and topics, as they both are generated mutually by each other in tweets. This process models the relationship between topics and hashtags in tweets, and uses them both as features for event discovery BIBREF20 . Azzam et al. used deep learning and cosine similarity to understand short text posts in communities of question answering BIBREF21 , BIBREF22 . Also, Hossny et al. used inductive logic programming to understand short sentences from news for translation purposes BIBREF23 " ], [ "The third approach is to identify sentiment through the context of the post, which is another application for distributional semantics requiring a huge amount of training data to build the required understanding of the context. Sentiment analysis approaches focus on recognizing the feelings of the crowd and use the score of each feeling as a feature to calculate the probability of social events occurring. The sentiment can represent the emotion, attitude, or opinion of the user towards the subject of the post. One approach to identify sentiment is to find smiley faces such as emoticons and emojis within a tweet or a post. Another approach is to use a sentiment labelled dictionary such as SentiWordNet to assess the sentiment associated with each word.", "Generally, sentiment analysis has not been used solely to predict civil unrest, especially as it still faces the challenges of sarcasm and understanding negation in ill-formed sentences. Meanwhile, it is used as an extra feature in combination with features from other approaches such as keywords and topic modelling. Paul et al. proposed a framework to predict the results of the presidential election in the United States in 2017. 
The proposed framework applied topic modelling to identify related topics in news, then used the topics as seeds for Word2Vec and LSTM to generate a set of enriched keywords. The generated keywords will be used to classify politics-related tweets, which are used to evaluate the sentiment towards each candidate. The sentiment score trend is used to predict the winning candidate BIBREF24 ." ], [ "Keywords can be selected as features as a single term or a word-pair or a skip-grams, which can be used for classification using multiple methods such as mutual information, TF-IDF, INLINEFORM0 , or traditional statistical methods such as ANOVA or correlation. Our problem faces two challenges: the first is the huge number of word-pairs extracted from all tweets for the whole time frame concurrently, which make some techniques such as TF-IDF and INLINEFORM1 computationally unfeasible as they require the technique to be distributable on parallel processors on a cluster. The second challenge is the temporal nature of the data which require some techniques that can capture the distributional semantics of terms along with the ground truth vector. In this section, we describe briefly a set of data association methods used to find the best word-pairs to identify the event days.", "Pearson correlation measures the linear dependency of the response variable on the independent variable with the maximum dependency of 1 and no dependency of zero. This technique needs to satisfy multiple assumptions to assess the dependency properly. These assumptions require the signals of the variables to be normally distributed, homoskedastic, stationary and have no outliers BIBREF25 , BIBREF26 . In social network and human-authored tweets, we cannot guarantee that the word-pairs signals throughout the timeframe will satisfy the required assumptions. Another drawback for Pearson correlation is that zero score does not necessarily imply no correlation, while no correlation implies zero score.", "Spearman is a rank-based metric that evaluates the linear association between the rank variables for each of the independent and the response variables. It simply evaluates the linear correlation between the ranked variables of the original variables. Spearman correlation assumes the monotonicity of the variables but it relaxes the Pearson correlation requirements of the signal to be normal, homoskedastic and stationary. Although the text signals in the social network posts do not satisfy the monotonicity assumption, Spearman correlation can select some word-pairs to be used as predictive features for classification. Spearman correlation has the same drawback of Pearson correlation that zero score does not necessarily imply no correlation while no correlation implies zero score.", "Distance correlation is introduced by Szekely et al . (2007) to measure the nonlinear association between two variables BIBREF27 . Distance correlation measures the statistical distance between probability distributions by dividing the Brownian covariance (distance covariance) between X and Y by the product of the distance standard deviations BIBREF28 , BIBREF29 .", "TF-IDF is the short of term frequency-inverse document frequency technique that is used for word selection for classification problems. The concept of this technique is to give the words that occur frequently within a specific class high weight as a feature and to penalize the words that occur frequently among multiple classes. 
For example, the term “Shakespeare” is considered a useful feature for classifying English literature documents, as it occurs frequently in English literature and rarely occurs in other kinds of documents. Meanwhile, the term “act” occurs frequently in English literature, but it also occurs frequently in other types of documents, so this term will be weighted for its frequent appearance and penalized for its prevalence across the classes by what we call inverse-document-frequency BIBREF30 .", "Mutual information is a metric for the amount of information one variable can tell about another. MI evaluates how similar the joint distribution of the two variables is to the product of the marginal distributions of each individual variable, which makes MI more general than correlation, as it is not limited to real cardinal values; it can also be applied to binary, ordinal and nominal values BIBREF31 . As mutual information uses the similarity of the distributions, it is not concerned with pairing the individual observations of X and Y as much as with the whole statistical distributions of X and Y. This makes MI more useful for clustering purposes than for classification purposes BIBREF32 .", "The cosine similarity metric calculates the cosine of the angle between two vectors. The cosine metric evaluates the similarity of the vectors' directions rather than of their magnitudes. The cosine similarity score equals 1 if the angle between the two vectors is zero, and is set to zero when the two vectors are perpendicular BIBREF33 . If the two vectors are oriented in opposite directions, the similarity score is -1. The cosine similarity metric is usually used in the positive space, which limits the scores to the interval [0,1].", "The Jaccard index, or coefficient, is a metric to evaluate the similarity of two sets by comparing their members to identify the common elements versus the distinct ones. The main advantage of Jaccard similarity is that it ignores the default value or the null assumption in the two vectors and only considers the non-default correct matches compared to the mismatches. This consideration makes the metric immune to data imbalance. The Jaccard index is similar to cosine similarity in that it retains the sparsity property, and it also allows the discrimination of collinear vectors." ], [ "The proposed model extracts the word-pairs having a high association with event days according to the distributional semantic hypothesis and uses them for training the model that will later be used for the binary classification task BIBREF34 , as illustrated in Figure FIGREF10 . The first step is data preparation, where we load all the tweets for each day, exclude the tweets containing URLs or unrelated topics, and clean each tweet by removing hashtags, non-Latin script and stop words. Then we lemmatize and stem each word in each tweet using the Lancaster stemmer. Finally, we extract the word-pairs in each tweet. A word-pair is a list of n words co-occurring together within the same tweet.", "The second step is to count the frequency of each word-pair per day; these counts are used as features to classify the day as either an event or non-event day. The formulation is a matrix with word-pairs as rows, days as columns, and the daily counts of each word-pair as values. The third step is to binarize the event count vector (ground truth) as well as the vector of each word-pair. 
Binarizing the event vector is done by checking whether the count of events on each day is larger than zero. The binarization of the word-pair count vectors is done by applying a temporal filter to the time series in order to identify the spikes, as explained in equation EQREF11 , where days with spikes are set to one and days without spikes are set to zero BIBREF35 , BIBREF36 . DISPLAYFORM0 ", "where x is the count of the word-pair, INLINEFORM0 is the time variable, INLINEFORM1 is the time difference, and the threshold is the minimum height of the spike. Afterwards, we compare the binary vector of each word-pair with the ground truth binary vector using the Jaccard similarity index, as stated in equation EQREF12 BIBREF4 , BIBREF5 . The word-pairs are then sorted in descending order of similarity score. The word-pairs with the highest scores are used as features for training the model in the fourth step. DISPLAYFORM0 ", "where WP is the word-pair vector and GT is the ground truth vector" ], [ "Once we identify the best word-pairs to be used as features for classification, we split the time series vector of each word-pair into a training vector and a testing vector. We then use the list of training vectors of the selected word-pairs to train the model, as explained in subsection SECREF13 , and use the list of testing vectors for the same word-pairs to classify any day as an event or non-event day, as described in subsection SECREF16 ." ], [ "The third step is to train the model using the set of features generated in the first step. We selected the Naive Bayes classifier as our classification technique for the following reasons: (1) the high bias of the NB classifier reduces the possibility of over-fitting, and our problem has a high probability of over-fitting due to the high number of features and the low number of observations; (2) the response variable is binary, so we do not need to regress the real value of the variable as much as we need to know the event class; and (3) the counts of the word-pairs as independent variables are limited to between 0 and 100 occurrences per day, which makes probabilistic approaches more effective than distance-based approaches.", "The training process aims to calculate three prior probabilities to be used later in calculating the posterior probabilities: (1) the probability of each word-pair count on a specific day given the status of the day as “event” or “non-event”; (2) the prior conditional probability of each word-pair given the event status INLINEFORM0 ; and (3) the probability of each event class as well as the probability of each word-pair, as stated in equations EQREF15 and EQREF15 . DISPLAYFORM0 DISPLAYFORM1 ", "where INLINEFORM0 is the word-pair, INLINEFORM1 is any class for event occurrence and word-pair is the vector of counts for the word-pairs extracted from tweets" ], [ "Once the prior probabilities are calculated using the training data, we use them to calculate the posterior probability of both the event-day and non-event-day classes given the values of the word-pairs, using equation EQREF17 . DISPLAYFORM0 ", "where INLINEFORM0 is the word-pair, INLINEFORM1 INLINEFORM2 , as the word-pairs are assumed to be independent and previously known from the training step." ], [ "The experiments are designed to detect civil unrest events in Melbourne on any specific day. In this experiment, we used all the tweets posted from Melbourne within a time frame of 640 days between December 2015 and September 2017. 
This time frame was split into 500 days for model training and 140 days for model testing, over multiple folds. The tweet location is specified using (1) the longitude and latitude meta-tag, (2) the tweet location meta-tag, (3) the profile location meta-tag, and (4) the time zone meta-tag. The total number of tweets exceeded 4 million tweets daily. Firstly, we cleaned the data from noisy signals, performed stemming and lemmatization, then extracted the word-pairs from each tweet and counted each word-pair per day. Example 1 illustrates how each tweet is cleaned, prepared and vectorized before being used for training the model. The steps are explained below:", "As explained in Example 1, each word-pair is transformed from a vector of integer values into a vector of binary values, denoted as INLINEFORM0 . INLINEFORM1 is used to calculate the Jaccard similarity index of the binary vector with the binary event vector. Each word-pair gets a similarity score according to the number of word-pair spikes matching the event days. This method uses the concept of distributional semantics, where co-occurring signals are likely to be semantically associated BIBREF34 .", "Example 1: Original Tweet: Protesters may be unmasked in wake of Coburg clash https://t.co/djjVIfzO3e (News) #melbourne #victoria Cleaned Tweet: protest unmask wake coburg clash news List of two-words-word-pairs: [`protest', `unmask'], [`protest', `wake'], [`protest', `Coburg'], ..., [`unmask', `wake'], [`unmask', `coburg'],..., [`clash', `news'] [`protest', `unmask'] training : INLINEFORM0 [`protest', `unmask'] testing : INLINEFORM1 Assuming a time frame of 20 days word-pair: [2,3,3,4,5,3,2,3,8,3,3,1,3,9,3,1,2,4,5,1] Spikes ( INLINEFORM2 ): [0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0] Events( INLINEFORM3 ): [0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0] INLINEFORM4 ", "Once we have selected the most informative word-pairs as features, we use their raw values to train the Naive Bayes classifier. The classifier is trained using 500 days selected randomly along the whole timeframe, then it is used to predict the other 140 days. To ensure the robustness of our experiment, we applied 10-fold cross-validation, where we performed the same experiment 10 times using 10 different folds of randomly selected training and testing data. The prediction achieved an average area under the ROC curve of 90%, which is statistically significant, and an F-score of 91%, which is immune to data imbalance, as listed in table TABREF18 . Figure FIGREF25 shows the ROC curves for the results of a single fold of Naive Bayes classification using the features extracted by each selection method. The classification results of the proposed method outperformed the benchmarks and the state of the art developed by Cui et al. (2017), Nguyen et al. (2017), Willer et al. (2016), and Adedoyin-Olowe et al. (2016), as illustrated in table TABREF33 BIBREF12 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 .", "The same experiment was applied to Sydney, Brisbane and Perth in Australia on a time frame of 640 days, with 500 days of training data and 140 days of testing data, and the results were similar to the Melbourne results, with an average AUC of 0.91 and an average F-score of 0.79. 
To ensure that the proposed method is language independent, we used the same method to classify civil unrest days in Jakarta using the Indonesian language. The classification scores were lower than the average scores for the English language by 0.05, taking into consideration that we did not apply any NLP pre-processing, such as stemming and lemmatization, to the Indonesian tweets.", "To verify the robustness of this feature selection method, we tested the selected features using multiple classifiers such as KNN, SVM, Naive Bayes and decision trees. The results emphasized that the word-pairs selected using the spike-matching method achieve better AUC scores than those selected by the other correlation methods, as listed in table TABREF19 ." ], [ "In this paper, we proposed a framework to detect civil unrest events by tracking the volume of each word-pair in Twitter. The main challenge with this model is to identify the word-pairs that are highly associated with the events and have predictive power. We used temporal filtering to detect the spikes within the time series vector and used Jaccard similarity to calculate the score of each word-pair according to its similarity with the binary vector of event days. These scores are used to rank the word-pairs as features for prediction.", "Once the word-pairs are identified, we trained a Naive Bayes classifier to classify any day in a specific region as an event or non-event day. We performed the experiment on both the Melbourne and Sydney regions in Australia, and we achieved a classification accuracy of 87%, with a precision of 77%, recall of 82%, area under the ROC curve of 91% and F-score of 79%. The results are all achieved after 10-fold randomized cross-validation, as listed in table TABREF32 .", "The main contributions of this paper are (1) to overcome twitter challenges of acronyms, short text, ambiguity and synonyms, (2) to identify the set of word-pairs to be used as features for live event detection, (3) to build an end-to-end framework that can detect the events lively according to the word counts. This work can be applied to similar problems, where specific tweets can be associated with life events such as disease outbreak or stock market fluctuation. This work can be extended to predict future events with one day in advance, where we will use the same method for feature selection in addition to to time series analysis of the historical patterns of the word-pairs." ], [ "This research was fully supported by the School of Mathematical Sciences at the University of Adelaide. All the data, computation and technical framework were supported by the Data-To-Decision-Collaborative-Research-Center (D2DCRC). " ] ], "section_name": [ "Introduction", "Event Detection Methods", "Keyword-based approaches", "Topic modelling approaches", "Sentiment analysis approaches", "Feature Selection Methods", "Spike Matching Method:", "Training and Prediction", "Training the model:", "Predicting Civil Unrest", "Experiments and Results", "Conclusions", "Acknowledgments" ] }
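To make the classification stage described above concrete, here is a minimal sketch, not the authors' pipeline: the raw daily counts of the selected word-pairs form the feature matrix, the binary event-day labels form the target, and off-the-shelf classifiers are compared on ROC AUC and F1, roughly mirroring the reported evaluation. The toy random data, the 500/140 split and the particular scikit-learn estimators are assumptions made here for illustration only.

```python
# Minimal sketch of the classification stage: word-pair daily counts as features,
# binary event-day labels as targets, compared on AUC and F1 with toy data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(1)
X = rng.poisson(3, size=(640, 100)).astype(float)  # 640 days x 100 word-pairs (toy counts)
y = rng.integers(0, 2, size=640)                   # toy event / non-event labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=500, random_state=0)
for clf in (GaussianNB(), LogisticRegression(max_iter=1000)):
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    print(type(clf).__name__, roc_auc_score(y_te, p), f1_score(y_te, p > 0.5))
```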
{ "answers": [ { "annotation_id": [ "fcade84668262402efadc9ee49bc327eaa737535" ], "answer": [ { "evidence": [ "The main contributions of this paper are (1) to overcome twitter challenges of acronyms, short text, ambiguity and synonyms, (2) to identify the set of word-pairs to be used as features for live event detection, (3) to build an end-to-end framework that can detect the events lively according to the word counts. This work can be applied to similar problems, where specific tweets can be associated with life events such as disease outbreak or stock market fluctuation. This work can be extended to predict future events with one day in advance, where we will use the same method for feature selection in addition to to time series analysis of the historical patterns of the word-pairs." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ ". This work can be extended to predict future events with one day in advance, where we will use the same method for feature selection in addition to to time series analysis of the historical patterns of the word-pairs." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "37c191ddacfecc6a406ef0ee28b2ad036797c948" ], "answer": [ { "evidence": [ "FLOAT SELECTED: TABLE II: A comparison of classification AVCs using word-pairs extracted by different feature selection methods" ], "extractive_spans": [], "free_form_answer": "Logistic regression", "highlighted_evidence": [ "FLOAT SELECTED: TABLE II: A comparison of classification AVCs using word-pairs extracted by different feature selection methods" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "b8e16554b3536239e9245851014bde6b088c54dc" ], "answer": [ { "evidence": [ "FLOAT SELECTED: TABLE II: A comparison of classification AVCs using word-pairs extracted by different feature selection methods" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: TABLE II: A comparison of classification AVCs using word-pairs extracted by different feature selection methods" ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "54a51b2def70523bab904d8ee2ff5d01565e15ce" ], "answer": [ { "evidence": [ "We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize the context of the word ('Messi','strike' ) is different than ('labour','strike').", "According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it values the spikes matching events and penalizes spikes with no event and penalizes events without spikes. 
Separate words can be noisy due to the misuse of the term by people, especially in big data environments. So, we rather used the word-pairs as textual features in order to capture the context of the word. For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”" ], "extractive_spans": [], "free_form_answer": "By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events.", "highlighted_evidence": [ "We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet.", "To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Do the authors suggest any future extensions to this work?", "Which of the classifiers showed the best performance?", "Were any other word similar metrics, besides Jaccard metric, tested?", "How are the keywords associated with events such as protests selected?" ], "question_id": [ "524abe0ab77db168d5b2f0b68dba0982ac5c1d8e", "858c51842fc3c1f3e6d2d7d853c94f6de27afade", "7c9c73508da628d58aaadb258f3a9d4cc2a8a9b3", "7b2bf0c1a24a2aa01d49f3c7e1bdc7401162c116" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "twitter", "twitter", "twitter", "twitter" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Fig. I: The the proposed pipeline extracts the word-pairs matching events of interest, then use the extracted word-pairs as features to detect civil unrest events.", "Fig. 2: The spikes in the time series signal for the word-pair ('Melbourne', 'Ral') are matched with the event days that represented as dotted vertical lines. The green lines represent spikes matching events. The Blue lines represent events with no matching spikes and red lines represent spikes that did not match any event.", "Fig. 3: The detailed pipeline for data processing, count vectorization, word-pair selection, training and prediction.", "TABLE II: A comparison of classification AVCs using word-pairs extracted by different feature selection methods", "TABLE III: The average results of event detection in multiple cities using multiple metrics after cross validating the results on 10 folds", "TABLE IV: The classification scores compared to benchmarks" ], "file": [ "1-FigureI-1.png", "2-Figure2-1.png", "5-Figure3-1.png", "7-TableII-1.png", "7-TableIII-1.png", "8-TableIV-1.png" ] }
[ "Which of the classifiers showed the best performance?", "How are the keywords associated with events such as protests selected?" ]
[ [ "1901.00570-7-TableII-1.png" ], [ "1901.00570-Introduction-5", "1901.00570-Introduction-4" ] ]
[ "Logistic regression", "By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events." ]
468
2002.10832
BERT Can See Out of the Box: On the Cross-modal Transferability of Text Representations
Pre-trained language models such as BERT have recently contributed to significant advances in Natural Language Processing tasks. Interestingly, while multilingual BERT models have demonstrated impressive results, recent works have shown how monolingual BERT can also be competitive in zero-shot cross-lingual settings. This suggests that the abstractions learned by these models can transfer across languages, even when trained on monolingual data. In this paper, we investigate whether such generalization potential applies to other modalities, such as vision: does BERT contain abstractions that generalize beyond text? We introduce BERT-gen, an architecture for text generation based on BERT, able to leverage on either mono- or multi- modal representations. The results reported under different configurations indicate a positive answer to our research question, and the proposed model obtains substantial improvements over the state-of-the-art on two established Visual Question Generation datasets.
{ "paragraphs": [ [ "The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potentials of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text?", "In the Artificial Intelligence community, several works have investigated the longstanding research question of whether textual representations encode visual information. On the one hand, a large body of research called language grounding considers that textual representations lack visual commonsense BIBREF5, and intend to ground the meaning of words BIBREF6, BIBREF7 and sentences BIBREF8, BIBREF9 in the perceptual world. In another body of work, textual representations have successfully been used to tackle multi-modal tasks BIBREF10 such as Zero-Shot Learning BIBREF11, Visual Question Answering BIBREF12 or Image Captioning BIBREF13. Following the latter line of research, in this paper we evaluate the potential of pre-trained language models to generalize in the context of Visual Question Generation (VQG) BIBREF14.", "The Visual Question Generation task allows us to investigate the cross-modal capabilities of BERT: unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where input can be textual and/or visual. VQG data usually includes images and the associated captions, along with corresponding questions about the image; thus, different experimental setups can be designed to analyze the impact of each modality. Indeed, the questions can be generated using i) textual (the caption), ii) visual (the image), or iii) multi-modal (both the caption and the image) input.", "From a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from captors and cameras), in order to refine their interpretation of the situation they are presented with. It could also allow systems relying on knowledge-bases to gain visual common sense and deal with the Human Reporting Bias BIBREF15, which states that the content of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text.", "Recently, BERT-based Multi-Modal Language Models have been proposed BIBREF16, BIBREF17, BIBREF18, BIBREF19 to tackle multi-modal tasks, using different approaches to incorporate visual data within BERT. From these works, it is left to explore whether the cross-modal alignment is fully learned, or it is to some extent already encoded in the BERT abstractions. 
Therefore, in contrast with those approaches, we explicitly avoid using the following complex mechanisms:", "Multi-modal supervision: all previous works exploit an explicit multi-modal supervision through a pre-training step; the models have access to text/image pairs as input, to align their representations. In contrast, our model can switch from text-only to image-only mode without any explicit alignment.", "Image-specific losses: specific losses such as Masked RoI (Region of Interest) Classification with Linguistic Clues BIBREF19 or sentence-image prediction BIBREF18 have been reported helpful to align visual and text modalities. Instead, we only use the original MLM loss from BERT (and not its entailment loss).", "Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of simple linear projection layer. This allows us to assess whether the representations encoded in BERT can transfer out-of-the-box to another modality.", "Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied both in mono and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated to their corresponding positions (bounding boxes). We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image.", "Our contributions can be summarized as follows:", "we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono and multi-modal settings;", "we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality;", "we report state-of-the art results on the VQG task;", "we provide extensive ablation analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi- modal)." ], [ "Learning unsupervised textual representations that can be applied to downstream tasks is a widely investigated topic in the literature. Text representations have been learned at different granularities: words with Word2vec BIBREF20, sentences with SkipThought BIBREF21, paragraphs with ParagraphVector BIBREF22 and contextualized word vectors with ELMo BIBREF23. Other methods leverage a transfer-learning approach by fine-tuning all parameters of a pre-trained model on a target task, a paradigm which has become mainstream since the introduction of BERT BIBREF0. BERT alleviates the problem of the uni-directionality of most language models (i.e. where the training objective aims at predicting the next word) by proposing a new objective called Masked Language Model (MLM). Under MLM, some words, that are randomly selected, are masked; the training objective aims at predicting them." ], [ "Following the successful application of BERT BIBREF0, and its derivatives, across a great majority of NLP tasks, several research efforts have focused on the design of multi-modal versions of BERT. 
VideoBERT BIBREF24, a joint video and text model, is pre-trained on a huge corpus of YouTube videos, and applied to action classification and video captioning tasks on the YouCook II dataset BIBREF25. The video is treated as a “visual sentence\" (each frame being a “visual word\") that is processed by the BERT Transformer.", "Concerning models jointly treating information from images and text, visual features extracted from the image are used as “visual words\", and a [SEP] special token is employed to separate textual and visual tokens. In the literature, visual features are object features extracted with a Faster R-CNN BIBREF26 – with the notable exception of BIBREF27 who used pooling layers from a CNN. A first body of work exploit single-stream Transformers in which visual features are incorporated in a BERT-like Transformer: this is the case for VisualBERT BIBREF18, VL-BERT BIBREF19, Unicoder-VL BIBREF28 and B2T2 BIBREF29. Other works, such as ViLBERT BIBREF16 and LXMERT BIBREF17 have investigated two-stream approaches: these models employ modality-specific encoders built on standard Transformer blocks, which are then fused into a cross-modal encoder. Interestingly, none of the aforementioned models have been used for generation tasks such as VQG, tackled in this work." ], [ "The text-based Question Generation task has been largely studied by the NLP community BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36. However, its visual counterpart, Visual Question Generation (VQG), has been comparatively less explored than standard well-known multi-modal tasks such as Visual Question Answering (VQA) BIBREF37, BIBREF38, BIBREF39, BIBREF40, Visual Dialog BIBREF41, BIBREF42, or Image Captioning BIBREF43, BIBREF44, BIBREF45.", "The VQG task was first introduced by BIBREF46 in their Neural Self Talk model: the goal is to gain knowledge about an image by iteratively generating questions (VQG) and answering them (VQA). The authors tackle the task with a simple RNN conditioned on the image, following Image Captioning works such as BIBREF45.", "Suitable data for the VQG task can come from standard image datasets on which questions have been manually annotated, such as $VQG_{COCO}$, $VQG_{Flickr}$, $VQG_{Bing}$ BIBREF14 , each consisting of 5000 images with 5 questions per image. Alternatively, VQG samples can be derived from Visual Question Answering datasets, such as $VQA1.0$ BIBREF47, by “reversing\" them (taking images as inputs and questions as outputs).", "A variety of approaches have been proposed. BIBREF14 use a standard Gated Recurrent Neural Network, i.e. a CNN encoder followed by a GRU decoder to generate questions. BIBREF48 aim at generating, for a given image, multiple visually grounded questions of varying types (what, when, where, etc.); similarly, BIBREF49 generate diverse questions using Variational Autoencoders. In BIBREF50, VQG is jointly tackled along its dual task (VQA), just as BIBREF46. In BIBREF51, BIBREF52, the image (processed by a CNN) and the caption (processed by a LSTM) are combined in a mixture module, followed by a LSTM decoder to generate the question, leading to state-of-the-art results on the VQG task on $VQA1.0$ data. More recently, BIBREF53 incorporate multiple cues – place information obtained from PlaceCNN BIBREF54, caption, tags – and combine them within a deep Bayesian framework where the contribution of each cue is weighted to predict a question, obtaining the current state-of-the-art results on $VQG_{COCO}$." 
], [ "In VQG, the objective is to generate a relevant question from an image and/or its caption. The caption $X_{txt}$ is composed of $M$ tokens $txt_1, ..., txt_M$; these tokens can be words or subwords (smaller than word) units depending on the tokenization strategy used. As BERT uses subword tokenization, throughout this paper we will refer to subwords as our tokenization units.", "The proposed model is illustrated in Figure FIGREF11. In SECREF12, we detail how images are incorporated in the Transformer framework. In SECREF14, we present BERT-gen, a novel approach to use BERT for text generation." ], [ "In this work, we treat textual and visual inputs similarly, by considering both as sequences. Since an image is not a priori sequential, we consider the image $X_{img}$ as a sequence of object regions $img_1, ..., img_N$, as described below.", "The images are first processed as in BIBREF17: a Faster-RCNN BIBREF26, pre-trained on Visual Genome BIBREF55, detects the $N=36$ most salient regions (those likely to include an object) per image. The weights of the Faster-RCNN are fixed during training, as we use the precomputed representations made publicly available by BIBREF56. Each image is thus represented by a sequence of $N=36$ semantic embeddings $f_1, ... f_{N}$ (one for each object region) of dimension 2048, along with the corresponding bounding box coordinates $b_1, ... b_{N}$ of dimension 4. With this approach, the BERT attention can be computed at the level of objects or salient image regions; had we represented images with traditional CNN features, the attention would instead correspond to a uniform grid of image regions without particular semantics, as noted in BIBREF56. To build an object embedding $o_j$ encoding both the object region semantics and its location in the image, we concatenate $f_j$ and $b_j$ ($j\\in [1,N]$). Hence, an image is seen as a sequence of $N=36$ visual representations (each corresponding to an object region) $o_1,..., o_N$. Object region representations $o_i$ are ordered by the relevance of the object detected, and the model has access to their relative location in the image through the vectors $b_i$.", "To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052\\hspace{-1.00006pt}\\times \\hspace{-1.00006pt}768$. The $N$ object regions detected in an image, are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding." ], [ "We cast the VQG task as a classic sequence-to-sequence BIBREF57 modeling framework:", "where the input $X=X_{txt}$ in caption-only mode, $X = X_{img}$ in image-only mode, and $X =X_{img} \\oplus X_{txt}$ in a multi-modal setup; $Y = {y_1,..., y_T}$ is the question composed of $T$ tokens. 
$\\Theta $ are the parameters of the BERT model; $W$ represents the weights of the linear layer used for projecting visual input to the BERT embedding layer.", "As mentioned earlier, BERT is a Transformer BIBREF1 encoder pre-trained using the Masked Language Model (MLM) objective: tokens within the text are replaced with a [MASK] special token, and the model is trained to predict them. Since BERT was not trained with an unidirectional objective, its usage for text generation is not straightforward.", "To generate text, BIBREF58 propose to stack a Transformer decoder, symmetric to BERT. However, the authors report training difficulties since the stacked decoder is not pre-trained, and propose a specific training regime, with the side-effect of doubling the number of parameters. BIBREF59 opt for an intermediate step of self-supervised training, introducing a unidirectional loss. As detailed below, we propose a relatively simpler, yet effective, method to use BERT out-of-the-box for text generation." ], [ "We simply use the original BERT decoder as is, initially trained to generate the tokens masked during its pre-training phase. It consists in a feed-forward layer, followed by normalization, transposition of the embedding layer, and a softmax over the vocabulary." ], [ "At inference time, to generate the first token of the question $y_1$, we concatenate [MASK] to the input tokens $X$, then encode $X \\oplus \\texttt {[MASK]}$ with the BERT encoder, and feed the output of the encoder to the decoder; $y_1$ is the output of the decoder for the [MASK] token. Subsequently, given $y_1$, we concatenate it to the input tokens and encode $X \\oplus y_1 \\oplus \\texttt {[MASK]}$ to predict the next token $y_2$. This procedure is repeated until the generation of a special token [EOS] signaling the end of the sentence." ], [ "As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens.", "This novel method allows to use pre-trained encoders for text generation. In this work, we initialize our model with the parameters from BERT-base. Nonetheless, the methodology can be applied to any pre-trained Transformer encoders such as RoBERTa BIBREF60, or Ernie BIBREF61." ], [ "The proposed model can be used in either mono- or multi- modal setups. This is accomplished by activating or deactivating specific modules." ], [ "Our main objective is to measure whether the textual knowledge encoded in pre-trained BERT can be beneficial in a cross-modal task. Thus, we define the three following experimental setups, which we refer to as Step 1, 2, and 3:" ], [ "Deactivating the Visual embedding module (see Figure FIGREF11), the model has only access to textual input, i.e. the caption. The model is initialized with the BERT weights and trained according to Equation DISPLAY_FORM15." ], [ "Conversely, deactivating the Textual embedding module (see Figure FIGREF11), the model has only access to the input image, not the caption. 
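Before detailing the image-only setup further, the generation procedure and the left-to-right attention mask described above can be sketched as follows; `bert_encoder` and `mlm_head` are stand-ins with assumed signatures (any pre-trained BERT encoder and its MLM head could play these roles), so this is a schematic outline rather than the authors' code.

```python
import torch

def build_attention_mask(n_input: int, n_target: int) -> torch.Tensor:
    """Boolean mask (True = may attend): input positions attend only to inputs;
    target position t attends to all inputs and to targets up to t."""
    size = n_input + n_target
    mask = torch.zeros(size, size, dtype=torch.bool)
    mask[:n_input, :n_input] = True
    for t in range(n_target):
        row = n_input + t
        mask[row, :n_input] = True
        mask[row, n_input:row + 1] = True
    return mask

def generate(bert_encoder, mlm_head, input_ids, mask_id, eos_id, max_len=20):
    """Iteratively append [MASK], re-encode, and read the prediction at that slot."""
    generated = []
    for _ in range(max_len):
        ids = torch.cat([input_ids, torch.tensor(generated + [mask_id])]).unsqueeze(0)
        attn = build_attention_mask(len(input_ids), len(generated) + 1)
        hidden = bert_encoder(ids, attention_mask=attn)    # assumed signature
        next_id = mlm_head(hidden[0, -1]).argmax().item()  # logits at the [MASK] slot
        if next_id == eos_id:
            break
        generated.append(next_id)
    return generated
```

Note that re-encoding the whole prefix at every step is quadratic in the output length, which remains cheap here because generated questions are short.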
To indicate the position $t$ of $img_t$ in the sequence, we sum the BERT positional embedding of $t$ to the visual representation of $img_t$, just as we would do for a text token $txt_t$. The model is initialized with the weights learned during step 1. All BERT-gen $\\Theta $ weights are frozen, and only the linear layer $W$ is learnable. Hence, if the model is able to learn to generate contextualized questions w.r.t. the image, it shows that a simple linear layer is enough to bridge the two modalities." ], [ "The full model is given access to both image and caption inputs. In this setup, we separate the two different inputs by a special BERT token [SEP]. Thus, the input sequence for the model takes the form of $\\texttt {[CLS]}, img_1,..., img_N, \\texttt {[SEP]}, txt_1,..., txt_M$. In step 1, only BERT-gen $\\Theta $ parameters are learned, as no image input was given. In step 2, $W$ is trained while keeping $\\Theta $ frozen. Finally then, in step 3, we fine-tune the model using both image and text inputs: the model is initialized with the parameters $\\Theta $ learned during step 1 and the $W$ learned during step 2, and we unfreeze all parameters." ], [ "Additionally, we report results obtained with: Image only (unfreeze), where the BERT-gen parameters $\\Theta $ are not frozen; and Image+Caption (from scratch) where the model is learned without the intermediate steps 1 and 2: the BERT-gen parameters $\\Theta $ are initialized with the weights from pre-trained BERT while $W$ is randomly initialized." ], [ "We conduct our experiments using two established datasets for Visual Question Generation:" ], [ "Introduced by BIBREF14, it contains 2500 training images, 1250 validation images and 1250 test images from MS COCO BIBREF62; each image has 5 corresponding questions and 5 ground-truth captions." ], [ "The Visual Question Answering BIBREF47 dataset can be used to derive VQG data BIBREF50. The task is reversed: instead of answering the question based on the image (VQA), models are called to generate a relevant question given the image (VQG). Also based on MS COCO, it contains 82783 training images, 40504 validation images and 81434 testing images. In $VQA1.0$, each image has 3 associated questions. Since the test set of MS COCO does not contain ground-truth captions, we generated artificial captions for it using NeuralTalk2 BIBREF45: for fair comparison, we used exactly the same model as BIBREF52 (MDN-Joint)." ], [ "We compare the proposed model to the following:" ], [ "BIBREF46 Questions are generated by a RNN conditioned on the image: at each generation step, the distribution over the vocabulary is computed and used to sample the next generated word. This baseline enables to generate diverse questions over the same image, as the word selection process is non-deterministic." ], [ "BIBREF46 Using the above model, selecting words with maximum probability from the computed distribution." ], [ "BIBREF52 State-of-the-art model on $VQA1.0$, based on joint usage of caption and image information." ], [ "BIBREF53 State-of-the-art on $VQG_{COCO}$. The model jointly leverages on multiple cues (the image, place information, caption, tags) to generate questions." ], [ "We report the following metrics for all experiments, consistently with previous works:" ], [ "BIBREF63 A precision-oriented metric, originally proposed to evaluate machine translation. It is based on the counts of overlapping n-grams between the generated sequences and the human references." 
], [ "BIBREF64 The recall-oriented counterpart to BLEU metrics, again based on n-gram overlaps." ], [ "BIBREF65 The harmonic mean between precision and recall w.r.t. unigrams. As opposed to the other metrics, it also accounts for stemming and synonymy matching." ], [ "BIBREF66 Originally designed for Image Captioning, it uses human consensus among the multiple references, favoring rare words and penalizing frequent words. This feature is particularly relevant for our task, as the automatically generated questions often follow similar patterns such as “What is the [...] ?\". Indeed, we verify experimentally (cf Table and Table ) that the CIDEr metric is the most discriminant in our quantitative results." ], [ "All models are implemented in PyText BIBREF67. For all our experiments we used a single NVIDIA RTX 2080 Ti GPU, a batch size of 128 and 5 epochs. We used the Adam optimizer with the recommended parameters for BERT: learning rate is set at $2e^{-5}$ with a warmup of $0.1$. The most computationally expensive experiment is the step 3 described above: for this model, completion of one epoch demands 30 seconds and 2 minutes for $VQG_{COCO}$ and $VQA$ datasets, respectively. Metrics were computed using the Python package released by BIBREF33." ], [ "In Table , we report quantitative results for the VQG task on $VQA1.0$. The Caption only model already shows strong improvements for all metrics over state-of-the-art models. For this text-only model, the impressive performance can mostly be attributed to BERT, demonstrating once again the benefits obtained using pre-trained language models. In our second step (Image only), the BERT $\\Theta $ parameters are frozen and only those of the cross-modal projection matrix $W$ are learned. Despite using a simple linear layer, the model is found to perform well, generating relevant questions given only visual inputs.", "This suggests that the conceptual representations encoded in pre-trained language models such as BERT can effectively be used beyond text. Further, we report an additional Image only experiment, this time unfreezing the BERT parameters $\\Theta $ – see Step 2 (unfreeze) in Table . As could be expected, since the model is allowed more flexibility, the performance is found to further improve.", "Finally, in our third step (Image + Caption), we obtain the highest scores, for all metrics. This indicates that the model is able to effectively leverage the combination of textual and visual inputs. Indeed, complementary information from both modalities can be exploited by the self-attention mechanism, making visual and textual tokens interact to generate the output sequences. Again, we additionally report the results obtained bypassing the intermediate steps 1 and 2: for the model denoted as Step 3 (from scratch) (last row of Table ), $\\Theta $ parameters are initialized with the original weights from pre-trained BERT, while the $W$ matrix is randomly initialized. Under this experimental condition, we observe lower performances, a finding that consolidates the importance of the multi-step training procedure we adopted.", "In Table , we report quantitative VQG results on $VQG_{COCO}$. These are globally consistent with the ones above for $VQA1.0$. However, we observe two main differences. First, a bigger relative improvement over the state-of-the-art. As the efficacy of pre-trained models is boosted in small-data scenarios BIBREF68, this difference can be explained by the smaller size of $VQG_{COCO}$. 
Second, we note that the Caption only model globally outperforms all other models, especially on the discriminant CIDEr metric. This can be explained by the fact that, in $VQG_{COCO}$, the captions are human-written (whereas they are automatically generated for $VQA1.0$) and, thus, of higher quality; moreover, the smaller size of the dataset could play a role hindering the ability to adapt to the visual modality. Nonetheless, the strong performances obtained for Step 2 compared to the baselines highlight the effectiveness of our method to learn a cross-modal projection even with a relatively small number of training images." ], [ "To get more in-depth understanding of our models, we report human assessment results in Table . We randomly sampled 50 images from the test set of $VQA1.0$. Each image is paired with its caption, the human-written question used as ground-truth, and the output for our three models: Caption only, Image only and Image+Caption. We asked 3 human annotators to assess the quality of each question using a Likert scale ranging from 1 to 5, for the following criteria: readability, measuring how well-written the question is; caption relevance, how relevant the question is w.r.t. to the caption; and, image relevance, how relevant the question is toward the image. For caption and image relevance, the annotators were presented with only the caption and only the image, respectively.", "We observe that all evaluated models produce well-written sentences, as readability does not significantly differ compared to human's questions. Unsurprisingly, the Caption only model shows a higher score for caption relevance, while the relatively lower image relevance score can be explained by the automatically generated and thus imperfect captions in the $VQA1.0$ dataset. Comparatively, the Image only model obtains lower caption relevance and higher image relevance scores; this indicates that the cross modal projection is sufficient to bridge modalities, allowing BERT to generate relevant questions toward the image. Finally, the Image + Caption model obtains the best image relevance among our models, consistently the quantitative results reported in Tables and ." ], [ "To interpret the behavior of attention-based models, it is useful to look at which tokens are given higher attention BIBREF69. In Figure FIGREF44, we present two images $A$ and $B$, along with their captions and the three generated questions corresponding to our three experimental setups (Caption only, Image only and Image + Caption). For this analysis, we average the attention vectors of all the heads in the last layer, and highlight the textual and visual tokens most attended by the models.", "For both images, the Caption only model attends to salient words in the caption. The Image only model remains at least as much relevant: on image $A$, it generates a question about a table (with an unclear attention). Interestingly, for image $B$, the Image only model corrects a mistake from step 1: it is a woman holding an umbrella rather than a man, and the attention is indeed focused on the woman in the image. Finally, the Image + Caption model is able to generate fitting questions about the image, with relatively little relevance to the caption: for image $A$, Image + Caption the model generates “What time is it?\" while paying attention to the clock; for image $B$, Image + Caption generates “What is the color of the umbrella ?\", focusing the attention on the umbrella. 
The captions of either samples include no mentions of clocks or umbrellas, further indicating effective alignment between visual and textual representations." ], [ "We carry out an additional experiment to analyze the text/vision alignment for each model. Figure FIGREF46 shows the cross-modal similarity $X_{sim}$ for different model scenarios, computed at each BERT-base layer from 1 to 12. We define the cross-modal similarity $X_{sim}$ as the cosine similarity between the vector representations of both modalities. These vectors are the two continuous space representations from a model when given as input either i) an image, or ii) its corresponding caption. We represent these captions and images vectors with the special BERT token [CLS], following previous works BIBREF70 where [CLS] is used to represent the entire sequence.", "The reported values correspond to the average cross-modal similarity calculated for all the examples of $VQG_{COCO}$ test set. In addition to the setups described in Section SECREF4 (Caption-only, Image-only and Image + Caption), we also report $X_{sim}$ for Random Transformer, a BERT architecture with random weights. As expected, its $X_{sim}$ is close to zero.", "All the other models are based on BERT. As suggested by BIBREF71, the first layers in BERT tend to encode lower-level language information. This might explain why the models show similar $X_{sim}$ scores up to the 9th layer, and diverge afterwards: the weights for those layers remain very similar between our fine-tuned models.", "For the last layer ($l=12$), we observe that $\\textit {Caption only} < \\textit {Image only} < \\textit {Image + Caption}$. The Caption only model has never seen images during training, and therefore is not able to encode semantic information given only images as input. Still, its reported $X_{sim} > 0$ can be attributed to the fact that, when fine-tuned on VQG during Step 1, BERT-gen encodes task-specific information in the [CLS] token embedding (e.g. a question ends with a “?\" and often begins with “What/Where/Who\"). $\\textit {Image only} > \\textit {Caption only}$ can be explained by the learning of the cross-modal projection $W$. However, since BERT is not fine-tuned, the model learns a “contortion\" allowing it to align text and vision. Finally, Image + Caption $>$ Image only can be attributed to BERT fine-tuning, contributing to an increase in the observed gap, and its emergence in earlier layers." ], [ "We investigated whether the abstractions encoded in a pre-trained BERT model can generalize beyond text. We proposed BERT-gen, a novel methodology that allows to directly generate text from out-of-the-box pre-trained encoders, either in mono- or multi- modal setups. Moreover, we applied BERT-gen to Visual Question Generation, obtaining state-of-the-art results on two established datasets. We showed how a simple linear projection is sufficient to effectively align visual and textual representations.", "In future works, we plan to extend BERT-gen to other modalities, such as audio or video, exploring the potential interactions that can emerge in scenarios where more than two modalities are present." 
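To make the cross-modal similarity probe from the alignment analysis concrete, the sketch below computes $X_{sim}$ from per-layer [CLS] vectors; the tensors are random placeholders and `cross_modal_similarity` is an illustrative helper, not part of any library.

```python
import torch
import torch.nn.functional as F

def cross_modal_similarity(cls_img: torch.Tensor, cls_txt: torch.Tensor) -> torch.Tensor:
    """Cosine similarity, per encoder layer, between the [CLS] vector obtained
    from image-only input and the one obtained from caption-only input."""
    # Both inputs: shape (num_layers, hidden_size)
    return F.cosine_similarity(cls_img, cls_txt, dim=-1)  # shape (num_layers,)

# Dummy example for a 12-layer, 768-dimensional encoder:
x_sim = cross_modal_similarity(torch.randn(12, 768), torch.randn(12, 768))
# Averaging x_sim over a test set yields the per-layer curves discussed above;
# with random weights the values stay close to zero, as reported for the
# Random Transformer baseline.
```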
] ], "section_name": [ "Introduction", "Related Work ::: Unsupervised Pre-trained Language Models", "Related Work ::: Multi-modal Language Models", "Related Work ::: Visual Question Generation", "Model", "Model ::: Representing an Image as Text", "Model ::: BERT-gen: Text Generation with BERT", "Model ::: BERT-gen: Text Generation with BERT ::: Decoder", "Model ::: BERT-gen: Text Generation with BERT ::: Next Token Prediction", "Model ::: BERT-gen: Text Generation with BERT ::: Attention Trick", "Model ::: BERT-gen: Text Generation with BERT ::: Modality-specific setups", "Experimental Protocol", "Experimental Protocol ::: 1. Caption only", "Experimental Protocol ::: 2. Image only", "Experimental Protocol ::: 3. Image + Caption", "Experimental Protocol ::: Ablations", "Experimental Protocol ::: Datasets", "Experimental Protocol ::: Datasets ::: @!START@$VQG_{COCO}$@!END@", "Experimental Protocol ::: Datasets ::: @!START@$VQA$@!END@", "Experimental Protocol ::: Baselines", "Experimental Protocol ::: Baselines ::: Sample", "Experimental Protocol ::: Baselines ::: Max", "Experimental Protocol ::: Baselines ::: MDN-Joint", "Experimental Protocol ::: Baselines ::: MC-SBN", "Experimental Protocol ::: Metrics", "Experimental Protocol ::: Metrics ::: BLEU", "Experimental Protocol ::: Metrics ::: ROUGE", "Experimental Protocol ::: Metrics ::: METEOR", "Experimental Protocol ::: Metrics ::: CIDEr", "Experimental Protocol ::: Implementation details", "Results", "Results ::: Human Evaluation", "Model Discussion ::: What does the model look at?", "Model Discussion ::: Cross-modal alignment", "Conclusion and Perspectives" ] }
{ "answers": [ { "annotation_id": [ "c160258e03c07ab73b09b1897524021026c99d1c" ], "answer": [ { "evidence": [ "As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens." ], "extractive_spans": [], "free_form_answer": "They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens.", "highlighted_evidence": [ "To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "37edb82c1f237f746616a093a7095c96cb872a2b" ], "answer": [ { "evidence": [ "To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052\\hspace{-1.00006pt}\\times \\hspace{-1.00006pt}768$. The $N$ object regions detected in an image, are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding." ], "extractive_spans": [], "free_form_answer": "The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards.", "highlighted_evidence": [ "To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052\\hspace{-1.00006pt}\\times \\hspace{-1.00006pt}768$. The $N$ object regions detected in an image, are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "What is different in BERT-gen from standard BERT?", "How are multimodal representations combined?" ], "question_id": [ "ed7985e733066cd067b399c36a3f5b09e532c844", "cd8de03eac49fd79b9d4c07b1b41a165197e1adb" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1. Model overview. Captions are encoded via BERT embeddings, while visual embeddings (blue) are obtained via a linear layer, used to project image representations to the embedding layer dimensions.", "Table 1. Quantitative VQG results on V QA1.0. We report results from previous works in the upper block, and those obtained by our proposed models in the bottom block.", "Table 2. Quantitative VQG results on V QGCOCO . We report results from previous works in the upper block, and those obtained by the our proposed models in the middle block. Human Performance is taken from Mostafazadeh et al. (2016).", "Table 3. Human evaluation results for three criterions: readability, caption relevance and image relevance. Two-tailed t-test results are reported in comparison to ”Human” (*: p < 0.05).", "Figure 2. Qualitative Analysis. We show the outputs of the three steps of our model, using two samples from the V QA1.0 test set. 1) Caption only; 2) Image only; 3) Image + Caption. Words and object regions with maximum attention are underlined and marked, respectively. Color intensity is proportional to attention.", "Figure 3. Cross-modal similarity Xsim between images in V QGCOCO and corresponding captions at each BERT encoding layer. Captions and images are embedded here using the [CLS] special token." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Figure2-1.png", "8-Figure3-1.png" ] }
[ "What is different in BERT-gen from standard BERT?", "How are multimodal representations combined?" ]
[ [ "2002.10832-Model ::: BERT-gen: Text Generation with BERT ::: Attention Trick-0" ], [ "2002.10832-Model ::: Representing an Image as Text-2" ] ]
[ "They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens.", "The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards." ]
470
1709.02271
Leveraging Discourse Information Effectively for Authorship Attribution
We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a substantial margin. We empirically investigate several featurization methods to understand the conditions under which discourse features contribute non-trivial performance gains, and analyze discourse embeddings.
{ "paragraphs": [ [ "Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level INLINEFORM0 -grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word INLINEFORM1 -grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.", "Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA.", " BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks.", "In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically,", "We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. 
We also show that SVMs, which can only use discourse probability vectors, neither produce a competitive performance (even with fine-tuning), nor generalize in using the discourse information effectively." ], [ "Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).", "The entity-grid model tracks the grammatical relation (subj, obj, etc.) that salient entities take on throughout a document as a way to capture local coherence . A salient entity is defined as a noun phrase that co-occurs at least twice in a document. Extensive literature has shown that subject and object relations are a strong signal for salience and it follows from the Centering Theory that you want to avoid rough shifts in the center BIBREF9 , BIBREF10 . B&L thus focus on whether a salient entity is a subject (s), object (o), other (x), or is not present (-) in a given sentence, as illustrated in Table TABREF1 . Every sentence in a document is encoded with the grammatical relation of all the salient entities, resulting in a grid similar to Table TABREF6 .", "The local coherence of a document is then defined on the basis of local entity transitions. A local entity transition is the sequence of grammatical relations that an entity can assume across INLINEFORM0 consecutive sentences, resulting in {s,o,x,-} INLINEFORM1 possible transitions. Following B&L, F&H14 consider sequences of length INLINEFORM2 =2, that is, transitions between two consecutive sentences, resulting in INLINEFORM3 =16 possible transitions. The probability for each transition is then calculated as the frequency of the transition divided by the total number of transitions. This step results in a single probability vector for every document, as illustrated in Table TABREF2 .", "B&L apply this model to a sentence ordering task, where the more coherent option, as evidenced by its transition probabilities, was chosen. In authorship attribution, texts are however assumed to already be coherent. F&H14 instead hypothesize that an author unconsciously employs the same methods for describing entities as the discourse unfolds, resulting in discernible transition probability patterns across multiple of their texts. Indeed, F&H14 find that adding the B&L vectors increases the accuracy of AA by almost 1% over a baseline lexico-syntactic model.", "RST discourse relations. F15 extends the notion of tracking salient entities to RST. Instead of using grammatical relations in the grid, RST discourse relations are specified. An RST discourse relation defines the relationship between two or more elementary discourse units (EDUs), which are spans of text that typically correspond to syntactic clauses. In a relation, an EDU can function as a nucleus (e.g., result.N) or as a satellite (e.g., summary.S). All the relations in a document then form a tree as in Figure FIGREF8 .", "F15 finds that RST relations are more effective for AA than grammatical relations. In our paper, we populate the entity-grid in the same way as F15's “Shallow RST-style” encoding, but use fine-grained instead of coarse-grained RST relations, and do not distinguish between intra-sentential and multi-sentential RST relations, or salient and non-salient entities. 
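A small, self-contained sketch of the Barzilay-and-Lapata-style transition probability vector described above is given below; the toy grid is invented and the code (plain Python) is only meant to illustrate the computation.

```python
from collections import Counter
from itertools import product

ROLES = ["s", "o", "x", "-"]
TRANSITIONS = ["".join(p) for p in product(ROLES, repeat=2)]  # 4^2 = 16 types

def transition_probabilities(grid):
    """grid: one row per sentence, one column per salient entity.
    Returns the 16-dim distribution over length-2 role transitions."""
    counts = Counter()
    for prev_row, next_row in zip(grid, grid[1:]):
        for prev_role, next_role in zip(prev_row, next_row):
            counts[prev_role + next_role] += 1
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in TRANSITIONS]

# Toy grid: 3 sentences, 2 salient entities
toy_grid = [["s", "o"],
            ["o", "-"],
            ["-", "x"]]
vector = transition_probabilities(toy_grid)  # e.g. P("so") = 1/4, P("o-") = 2/4
```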
We explore various featurization techniques using the coding scheme.", "CNN model. shrestha2017 propose a convolutional neural network formulation for AA tasks (detailed in Section SECREF3 ). They report state-of-the-art performance on a corpus of Twitter data BIBREF11 , and compare their models with alternative architectures proposed in the literature: (i) SCH: an SVM that also uses character n-grams, among other stylometric features BIBREF11 ; (ii) LSTM-2: an LSTM trained on bigrams BIBREF12 ; (iii) CHAR: a Logistic Regression model that takes character n-grams BIBREF13 ; (iv) CNN-W: a CNN trained on word embeddings BIBREF14 . The authors show that the model CNN2 produces the best performance overall. Ruder:16 apply character INLINEFORM0 -gram CNNs to a wide range of datasets, providing strong empirical evidence that the architecture generalizes well. Further, they find that including word INLINEFORM1 -grams in addition to character INLINEFORM2 -grams reduces performance, which is in agreement with BIBREF5 's findings." ], [ "Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.", "CNN2. CNN2 is the baseline model with no discourse features. Illustrated in Figure FIGREF10 (center), it consists of (i) an embedding layer, (ii) a convolution layer, (iii) a max-pooling layer, and (iv) a softmax layer. We briefly sketch the processing procedure and refer the reader to BIBREF4 for mathematical details.", "The network takes a sequence of character bigrams INLINEFORM0 as input, and outputs a multinomial INLINEFORM1 over class labels as the prediction. The model first looks up the embedding matrix to produce a sequence of embeddings for INLINEFORM2 (i.e., the matrix INLINEFORM3 ), then pushes the embedding sequence through convolutional filters of three bigram-window sizes INLINEFORM4 , each yielding INLINEFORM5 feature maps. We then apply the max-over-time pooling BIBREF15 to the feature maps from each filter, and concatenate the resulting vectors to obtain a single vector INLINEFORM6 , which then goes through the softmax layer to produce predictions.", "CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). 
For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer.", "CNN2-DE. In this model (Figure FIGREF10 , center+right), we embed discourse features in high-dimensional space (similar to char-bigram embeddings). Let INLINEFORM0 be a sequence of discourse features, we treat it in a similar fashion to the char-bigram sequence INLINEFORM1 , i.e. feeding it through a “parallel” convolutional net (Figure FIGREF10 right). The operation results in a pooling vector INLINEFORM2 . We concatenate INLINEFORM3 to the pooling vector INLINEFORM4 (which is constructed from INLINEFORM5 ) then feed INLINEFORM6 to the softmax layer for the final prediction." ], [ "We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 )." ], [ "The statistics for the three datasets used in the experiments are summarized in Table TABREF16 .", "novel-9. This dataset was compiled by F&H14: a collection of 19 novels by 9 nineteenth century British and American authors in the Project Gutenberg. To compare to F&H14, we apply the same resampling method (F&H14, Section 4.2) to correct the imbalance in authors by oversampling the texts of less-represented authors.", "novel-50. This dataset extends novel-9, compiling the works of 50 randomly selected authors of the same period. For each author, we randomly select 5 novels for a total 250 novels.", "IMDB62. IMDB62 consists of 62K movie reviews from 62 users (1,000 each) from the Internet Movie dataset, compiled by Seroussi:11. Unlike the novel datasets, the reviews are considerably shorter, with a mean of 349 words per text." ], [ "As described in Section SECREF2 , in both the GR and RST variants, from each input entry we start by obtaining an entity grid.", "CNN2-PV. We collect the probabilities of entity role transitions (in GR) or discourse relations (in RST) for the entries. Each entry corresponds to a probability distribution vector.", "CNN2-DE. We employ two schema for creating discourse feature sequences from an entity grid. While we always read the grid by column (by a salient entity), we vary whether we track the entity across a number of sentences (n rows at a time) or across the entire document (one entire column at a time), denoted as local and global reading respectively.", "For the GR discourse features, in the case of local reading, we process the entity roles one sentence pair at a time (Figure FIGREF18 , left). For example, in processing the pair INLINEFORM0 , we find the first non-empty role INLINEFORM1 for entity INLINEFORM2 in INLINEFORM3 . If INLINEFORM4 also has a non-empty role INLINEFORM5 in the INLINEFORM6 , we collect the entity role transition INLINEFORM7 . We then proceed to the following entity INLINEFORM8 , until we process all the entities in the grid and move to the next sentence pair. For the global reading, we instead read the entity roles by traversing one column of the entire document at a time (Figure FIGREF18 , right). 
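As an illustration of the two reading schemes (the details of the global scheme for GR features continue below), here is a simplified plain-Python sketch over an invented grid; it is not the exact preprocessing code used in the paper.

```python
def local_reading(grid):
    """Sentence-pair-wise reading: for each pair of consecutive sentences,
    collect a transition for every entity that fills a role in both."""
    sequence = []
    for prev_row, next_row in zip(grid, grid[1:]):
        for r1, r2 in zip(prev_row, next_row):
            if r1 != "-" and r2 != "-":
                sequence.append(r1 + r2)
    return sequence

def global_reading(grid):
    """Column-wise reading: follow one entity through the whole document and
    collect transitions between its consecutive non-empty roles."""
    sequence = []
    for col in range(len(grid[0])):
        roles = [row[col] for row in grid if row[col] != "-"]
        sequence.extend(a + b for a, b in zip(roles, roles[1:]))
    return sequence

toy_grid = [["s", "o"],
            ["-", "o"],
            ["o", "-"],
            ["s", "x"]]
print(local_reading(toy_grid))   # ['oo', 'os']
print(global_reading(toy_grid))  # ['so', 'os', 'oo', 'ox']
```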
The entity roles in all the sentences are read for one entity: we collect transitions for all the non-empty roles (e.g., INLINEFORM9 , but not INLINEFORM10 ).", "For the RST discourse features, we process non-empty discourse relations also through either local or global reading. In the local reading, we read all the discourse relations in a sentence (a row) then move on to the next sentence. In the global reading, we read in discourse relations for one entity at a time. This results in sequences of discourse relations for the input entries." ], [ "Baseline-dataset experiments. All the baseline-dataset experiments are evaluated on novel-9. As a comparison to previous work (F15), we evaluate our models using a pairwise classification task with GR discourse features. In her model, novels are partitioned into 1000-word chunks, and the model is evaluated with accuracy. Surpassing F15's SVM model by a large margin, we then further evaluate the more difficult multi-class task, i.e., all-class prediction simultaneously, with both GR and RST discourse features and the more robust F1 evaluation. In this multi-class task, we implement two SVMs to extend F15's SVM models: (i) SVM2: a linear-kernel SVM which takes char-bigrams as input, as our CNNs, and (ii) SVM2-PV: an updated SVM2 which takes also probability vector features.", "Further, we are interested in finding a performance threshold on the minimally-required input text length for discourse information to “kick in”. To this end, we chunk the novels into different sizes: 200-2000 words, at 200-word intervals, and evaluate our CNNs in the multi-class condition.", "Generalization-dataset experiments. To confirm that our models generalize, we pick the best models from the baseline-dataset experiments and evaluate on the novel-50 and IMDB62 datasets. For novel-50, the chunking size applied is 2000-word as per the baseline-dataset experiment results, and for IMDB62, texts are not chunked (i.e., we feed the models with the original reviews directly). For model comparison, we also run the SVMs (i.e., SVM2 and SVM2-PV) used in the baseline-dataset experiment. All the experiments conducted here are multi-class classification with macro-averaged F1 evaluation.", "Model configurations. Following F15, we perform 5-fold cross-validation. The embedding sizes are tuned on novel-9 (multi-class condition): 50 for char-bigrams; 20 for discourse features. The learning rate is 0.001 using the Adam Optimizer BIBREF18 . For all models, we apply dropout regularization of 0.75 BIBREF19 , and run 50 epochs (batch size 32). The SVMs in the baseline-dataset experiments use default settings, following F15. For the SVMs in the generalization-dataset experiments, we tuned the hyperparameters on novel-9 with a grid search, and found the optimal setting as: stopping condition tol is 1e-5, at a max-iteration of 1,500." ], [ "Baseline-dataset experiments. The results of the baseline-dataset experiments are reported in Table TABREF24 , TABREF25 and Figure FIGREF27 . In Table TABREF24 , Baseline denotes the dumb baseline model which always predicts the more-represented author of the pair. Both SVMs are from F15, and we report her results. SVM (LexSyn) takes character and word bi/trigrams and POS tags. SVM (LexSyn-PV) additionally includes probability vectors, similar to our CNN2-PV. 
In this part of the experiment, while the CNNs clear a large margin over SVMs, adding discourse in CNN2-PV brings only a small performance gain.", "Table TABREF25 reports the results from the multi-class classification task, the more difficult task. Here, probability vector features (i.e., PV) again fail to contribute much. The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). In contrast, the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. In general, RST features work better than GR features.", "The results of the varying-sizes experiments are plotted in Figure FIGREF27 . Again, we observe the overall pattern that discourse features improve the F1 score, and RST features procure superior performance. Crucially, however, we note there is no performance boost below the chunk size of 1000 for GR features, and below 600 for RST features. Where discourse features do help, the GR-based models achieve, on average, 1 extra point on F1, and the RST-based models around 2.", "Generalization-dataset experiments. Table TABREF28 summarizes the results of the generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively)." ], [ "General analysis. Overall, we have shown that discourse information can improve authorship attribution, but only when properly encoded. This result is critical in demonstrating the particular value of discourse information, because typical stylometric features such as word INLINEFORM0 -grams and POS tags do not add additional performance improvements BIBREF3 , BIBREF5 .", "In addition, the type of discourse information and the way in which it is featurized are tantamount to this performance improvement: RST features provide overall stronger improvement, and the global reading scheme for discourse embedding works better than the local one. The discourse embedding proves to be a superior featurization technique, as evidenced by the generally higher performance of CNN2-DE models over CNN2-PV models. With an SVM, where the option is not available, we are only able to use relation probability vectors to obtain a very modest performance improvement.", "Further, we found an input-length threshold for the discourse features to help (Section SECREF26 ). Not surprisingly, discourse does not contribute on shorter texts. Many of the feature grids are empty for these shorter texts– either there are no coreference chains or they are not correctly resolved. 
Currently we only have empirical results on short novel chunks and movie reviews, but believe the finding would generalize to Twitter or blog posts.", "Discourse embeddings. It does not come as a surprise that discourse embedding-based models perform better than their relation probability-based peers. The former (i) leverages the weight learning of the entire computational graph of the CNN rather than only the softmax layer, as the PV models do, and (ii) provides a more fine-grained featurization of the discourse information. Rather than merely taking a probability over grammatical relation transitions (in GR) or discourse relation types (in RST), in DE-based models we learn the dependency between grammatical relation transitions/discourse relations through the INLINEFORM0 -sized filter sweeps.", "To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.", "Global vs. Local featurization. As described in Section SECREF17 , the global reading processes all the discourse features for one entity at a time, while the local approach reads one sentence (or one sentence pair) at a time. In all the relevant experiments, global featurization showed a clear performance advantage (on average 1 point gain in F1). Recall that the creation of the grids (both GR and RST) depend on coreference chains of entities (Section SECREF2 ), and only the global reading scheme takes advantage of the coreference pattern whereas the local reading breaks the chains. To find out whether coreference pattern is the key to the performance difference, we further ran a probe experiment where we read RST discourse relations in the order in which EDUs are arranged in the RST tree (i.e., left-to-right), and evaluated this model on novel-50 and IMDB62 with the same hyperparameter setting. The F1 scores turned out to be very close to the CNN2-DE (local) model, at 97.5 and 90.9. Based on this finding, we tentatively confirm the importance of the coreference pattern, and intend to further investigate how exactly it matters for the classification performance.", "GR vs. RST. RST features in general effect higher performance gains than GR features (Table TABREF28 ). The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST)." ], [ "We have conducted an in-depth investigation of techniques that (i) featurize discourse information, and (ii) effectively integrate discourse features into the state-of-the-art character-bigram CNN classifier for AA. 
Beyond confirming the overall superiority of RST features over GR features in larger and more difficult datasets, we present a discourse embedding technique that is unavailable for previously proposed discourse-enhanced models. The new technique enabled us to push the envelope of the current performance ceiling by a large margin.", "Admittedly, in using the RST features with entity-grids, we lose the valuable RST tree structure. In future work, we intend to adopt more sophisticated methods such as RecNN, as per Ji:17, to retain more information from the RST trees while reducing the parameter size. Further, we aim to understand how discourse embeddings contribute to AA tasks, and find alternatives to coreference chains for shorter texts." ] ], "section_name": [ "Introduction", "Background", "Models", "Experiments and Results", "Datasets", "Featurization", "Experiments", "Results", "Analysis", "Conclusion" ] }
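For reference, a minimal PyTorch sketch in the spirit of the CNN2-DE architecture from the Models section follows: a char-bigram branch and a parallel discourse-embedding branch whose max-pooled vectors are concatenated before the softmax layer. The embedding sizes (50 and 20) follow the paper, while the filter count and window sizes are placeholders; this is an assumption-laden illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class Cnn2DeSketch(nn.Module):
    """Two-branch classifier: char-bigram CNN plus a parallel discourse CNN."""
    def __init__(self, n_bigrams, n_disc, n_authors,
                 bigram_dim=50, disc_dim=20, n_filters=100, windows=(3, 4, 5)):
        super().__init__()
        self.bigram_emb = nn.Embedding(n_bigrams, bigram_dim)
        self.disc_emb = nn.Embedding(n_disc, disc_dim)
        self.bigram_convs = nn.ModuleList(
            [nn.Conv1d(bigram_dim, n_filters, w) for w in windows])
        self.disc_convs = nn.ModuleList(
            [nn.Conv1d(disc_dim, n_filters, w) for w in windows])
        self.out = nn.Linear(2 * n_filters * len(windows), n_authors)

    @staticmethod
    def _encode(ids, emb, convs):
        h = emb(ids).transpose(1, 2)                          # (batch, dim, seq)
        pooled = [torch.relu(c(h)).max(dim=2).values for c in convs]
        return torch.cat(pooled, dim=1)                       # max-over-time pooling

    def forward(self, bigram_ids, disc_ids):
        z = torch.cat([self._encode(bigram_ids, self.bigram_emb, self.bigram_convs),
                       self._encode(disc_ids, self.disc_emb, self.disc_convs)], dim=1)
        return self.out(z)                                    # logits over authors
```

Dropping the discourse branch recovers a plain CNN2; feeding a discourse probability vector directly into the final linear layer instead of the second branch would correspond to CNN2-PV.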
{ "answers": [ { "annotation_id": [ "381f894273131354d32d5b877ecc961d1d8f07e8" ], "answer": [ { "evidence": [ "To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work." ], "extractive_spans": [], "free_form_answer": "They perform t-SNE clustering to analyze discourse embeddings", "highlighted_evidence": [ "To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "758c0f0bef0d32d915cb1cabb47a589e7c1b2230" ], "answer": [ { "evidence": [ "In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically," ], "extractive_spans": [ "character bigram CNN classifier" ], "free_form_answer": "", "highlighted_evidence": [ "In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "b2ac76e96ac8d4e2fb383be2f406141c6c711b9c" ], "answer": [ { "evidence": [ "CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer." ], "extractive_spans": [], "free_form_answer": "They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer.", "highlighted_evidence": [ "In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. 
Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). ", "Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "efcd9b10f30573d4efd97c5adc0cd81fc5d0db9d" ], "answer": [ { "evidence": [ "CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer." ], "extractive_spans": [], "free_form_answer": "Entity grid with grammatical relations and RST discourse relations.", "highlighted_evidence": [ "In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How are discourse embeddings analyzed?", "What was the previous state-of-the-art?", "How are discourse features incorporated into the model?", "What discourse features are used?" ], "question_id": [ "cfbccb51f0f8f8f125b40168ed66384e2a09762b", "feb4e92ff1609f3a5e22588da66532ff689f3bcc", "f10325d022e3f95223f79ab00f8b42e3bb7ca040", "5e65bb0481f3f5826291c7cc3e30436ab4314c61" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 2: The probability vector for the excerpt in Table 1 capturing transition probabilities of length 2.", "Figure 1: RST tree for the first sentence of the excerpt in Table 1.", "Table 3: The entity grid for the excerpt in Table 1, where columns are salient entities and rows are sentences. Each cell contains the grammatical relation of the given entity for the given sentence (subject s, object o, another grammatical relation x, or not present -). If an entity occurs multiple times in a sentence, only the highest-ranking relation is recorded.", "Figure 2: The bigram character CNN models", "Figure 3: Two variants for creating sequences of grammatical relation transitions in an entity grid.", "Table 4: Statistics for datasets.", "Table 5: Accuracy for pairwise author classification on the novel-9 dataset, using either a dumb baseline, an SVM with and without discourse to replicate F15, or a bigram-character CNN (CNN2) with and without discourse.", "Table 6: Macro-averaged F1 score for multi-class author classification on the novel-9 dataset, using either no discourse (None), grammatical relations (GR), or RST relations (RST). These experiments additionally include the Discourse Embedding (DE) models for GR and RST.", "Figure 4: Macro-averaged F1 score for multi-class author classification on the novel-9 dataset in varied chunk sizes.", "Table 7: Macro-averaged F1 score for multi-class author classification on the large datasets, using either no discourse (None), grammatical relations (GR), or RST relations (RST).", "Table 8: Nearest neighbors of example embeddings with t-SNE clustering (top 5)" ], "file": [ "2-Table2-1.png", "3-Figure1-1.png", "3-Table3-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-Table4-1.png", "6-Table5-1.png", "6-Table6-1.png", "7-Figure4-1.png", "7-Table7-1.png", "8-Table8-1.png" ] }
[ "How are discourse embeddings analyzed?", "How are discourse features incorporated into the model?", "What discourse features are used?" ]
[ [ "1709.02271-Analysis-4" ], [ "1709.02271-Models-3" ], [ "1709.02271-Models-3" ] ]
[ "They perform t-SNE clustering to analyze discourse embeddings", "They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer.", "Entity grid with grammatical relations and RST discourse relations." ]
472
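As a companion to the CNN2-PV featurization described in this entry, the snippet below counts grammatical-role transitions of length 2 down each entity column of an entity grid and normalises them into a probability vector, in the spirit of Table 2 and Table 3. The toy grid, the fixed role inventory, and the normalisation are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter
from itertools import product

# Toy entity grid: rows are sentences, columns are salient entities, and each
# cell holds the entity's grammatical role in that sentence
# (s = subject, o = object, x = other, - = absent).
grid = [
    ["s", "o", "-"],
    ["o", "-", "x"],
    ["s", "s", "-"],
]

roles = ["s", "o", "x", "-"]
counts = Counter()
# Count role transitions of length 2 down each entity column.
for col in range(len(grid[0])):
    for row in range(len(grid) - 1):
        counts[(grid[row][col], grid[row + 1][col])] += 1

total = sum(counts.values())
# Probability vector over all 16 possible transitions, in a fixed order.
prob_vector = [counts[(a, b)] / total for a, b in product(roles, roles)]
print(dict(zip(product(roles, roles), prob_vector)))
```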
1807.08204
Towards Neural Theorem Proving at Scale
Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by Rockt{\"{a}}schel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm where unification between terms is replaced by the similarity between their embedding representations. For answering a given query, this model needs to consider all possible proof paths, and then aggregate results - this quickly becomes infeasible even for small Knowledge Bases (KBs). We observe that we can accurately approximate the inference process in this model by considering only proof paths associated with the highest proof scores. This enables inference and learning on previously impracticable KBs.
{ "paragraphs": [ [ "Recent advancements in deep learning intensified the long-standing interests in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementing properties of these systems. Symbolic reasoning models offer interpretability, efficient generalisation from a small number of examples, and the ability to leverage knowledge provided by an expert. However, these systems are unable to handle ambiguous and noisy high-dimensional data such as sensory inputs BIBREF5 . On the other hand, representation learning models exhibit robustness to noise and ambiguity, can learn task-specific representations, and achieve state-of-the-art results on a wide variety of tasks BIBREF6 . However, being universal function approximators, these models require vast amounts of training data and are treated as non-interpretable black boxes.", "One way of integrating the symbolic and sub-symbolic models is by continuously relaxing discrete operations and implementing them in a connectionist framework. Recent approaches in this direction focused on learning algorithmic behaviour without the explicit symbolic representations of a program BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , and consequently with it BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . In the inductive logic programming setting, two new models, NTP BIBREF0 and Differentiable Inductive Logic Programming ( $\\partial $ ILP) BIBREF16 successfully combined the interpretability and data efficiency of a logic programming system with the expressiveness and robustness of neural networks.", "In this paper, we focus on the NTP model proposed by BIBREF0 . Akin to recent neural-symbolic models, NTP rely on a continuous relaxation of a discrete algorithm, operating over the sub-symbolic representations. In this case, the algorithm is an analogue to Prolog's backward chaining with a relaxed unification operator. The backward chaining algorithm constructs neural networks, which model continuously relaxed proof paths using sub-symbolic representations. These representations are learned end-to-end by maximising the proof scores of facts in the KB, while minimising the score of facts not in the KB, in a link prediction setting BIBREF17 . However, while the symbolic unification checks whether two terms can represent the same structure, the relaxed unification measures the similarity between their sub-symbolic representations.", "This continuous relaxation is at the crux of NTP' inability to scale to large datasets. During both training and inference, NTP need to compute all possible proof trees needed for proving a query, relying on the continuous unification of the query with all the rules and facts in the KB. This procedure quickly becomes infeasible for large datasets, as the number of nodes of the resulting computation graph grows exponentially.", "Our insight is that we can radically reduce the computational complexity of inference and learning by generating only the most promising proof paths. In particular, we show that the problem of finding the facts in the KB that best explain a query can be reduced to a $k$ -nearest neighbour problem, for which efficient exact and approximate solutions exist BIBREF18 . This enables us to apply NTP to previously unreachable real-world datasets, such as WordNet." 
], [ "In NTP, the neural network structure is built recursively, and its construction is defined in terms of modules similarly to dynamic neural module networks BIBREF19 . Each module, given a goal, a KB, and a current proof state as inputs, produces a list of new proof states, where the proof states are neural networks representing partial proof success scores.", "Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In discrete unification, non-variable symbols are checked for equality, and the proof fails if the symbols differ. In NTP, rather than comparing symbols, their embedding representations are compared by means of a RBF kernel. This allows matching different symbols with similar semantics, such as matching relations like ${grandFatherOf}$ and ${grandpaOf}$ . Given a proof state $= (_, _)$ , where $_$ and $_$ denote a substitution set and a proof score, respectively, unification is computed as follows:", " 1. unify(, , ) =", "2. unify(, G, ) =", "3. unify(H, , ) =", "4. unify(h::H, g::G, ) = unify(H,G,')", " with ' = (', ') where:", " '= {ll {h/g} if hV", "{g/h} if gV, hV", " otherwise }", "'= ( , { ll k(h:, g:) if hV, gV", "1 otherwise } )", "where $_{h:}$ and $_{g:}$ denote the embedding representations of $h$ and $g$ , respectively.", "OR Module. This module attempts to apply rules in a KB. The name of this module stems from the fact that a KB can be seen as a large disjunction of rules and facts. In backward chaining reasoning systems, the OR module is used for unifying a goal with all facts and rules in a KB: if the goal unifies with the head of the rule, then a series of goals is derived from the body of such a rule. In NTP, we calculate the similarity between the rule and the facts via the unify operator. Upon calculating the continuous unification scores, OR calls AND to prove all sub-goals in the body of the rule.", " or(G, d, ) = ' | ' and(B, d, unify(H, G, )),", " H :– B ", "AND Module. This module is used for proving a conjunction of sub-goals derived from a rule body. It first applies substitutions to the first atom, which is afterwards proven by calling the OR module. Remaining sub-goals are proven by recursively calling the AND module.", " 1. and(_, _, ) =", "2. and(_, 0, _) =", "3. and(, _, ) =", "4. and(G:G, d, ) = ” | ”and(G, d, '),", " ' or(substitute(G, ), d-1, ) ", "For further details on NTPs and the particular implementation of these modules, see BIBREF0 ", "After building all the proof states, NTPs define the final success score of proving a query as an $$ over all the generated valid proof scores (neural networks).", "Assume a KB $\\mathcal {K}$ , composed of $|\\mathcal {K}|$ facts and no rules, for brevity. Note that $|\\mathcal {K}|$ can be impractical within the scope of NTP. For instance, Freebase BIBREF20 is composed of approximately 637 million facts, while YAGO3 BIBREF21 is composed by approximately 9 million facts. 
Given a query $g \triangleq [{grandpaOf}, {abe}, {bart}]$ , NTP compares its embedding representation – given by the embedding vectors of ${grandpaOf}$ , ${abe}$ , and ${bart}$ – with the representation of each of the $|\mathcal {K}|$ facts.", "The resulting proof score of $g$ is given by: ", "$$ 
\begin{aligned}
\max _{f \in \mathcal {K}} & \; \mathrm{unify}_{\theta }(g, [f_{p}, f_{s}, f_{o}], (\emptyset , \rho )) \\
& = \max _{f \in \mathcal {K}} \; \min \big \lbrace 
\rho ,
\operatorname{k}(\theta _{{grandpaOf}:}, \theta _{f_{p}:}),\\
&\qquad \qquad \qquad \operatorname{k}(\theta _{{abe}:}, \theta _{f_{s}:}),
\operatorname{k}(\theta _{{bart}:}, \theta _{f_{o}:})
\big \rbrace ,
\end{aligned}$$ (Eq. 3) ", "where $f \triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $\theta _{s:}$ is the embedding representation of a symbol $s$ , $\rho $ denotes the initial proof score, and $\operatorname{k}({}\cdot {}, {}\cdot {})$ denotes the RBF kernel. Note that the maximum proof score is given by the fact $f \in \mathcal {K}$ that maximises the similarity between its components and the goal $g$ : solving the maximisation problem in eq:inference can be equivalently stated as a nearest neighbour search problem. In this work, we use ANNS during the forward pass for considering only the most promising proof paths during the construction of the neural network." ], [ "From ex:inference, we can see that the inference problem can be reduced to a nearest neighbour search problem. Given a query $g$ , the problem is finding the fact(s) in $\mathcal {K}$ that maximise the unification score. This represents a computational bottleneck, since it is very costly to find the exact nearest neighbour in high-dimensional Euclidean spaces, due to the curse of dimensionality BIBREF22 . Exact methods are rarely more efficient than brute-force linear scan methods when the dimensionality is high BIBREF23 , BIBREF24 . A practical solution consists in ANNS algorithms, which relax the condition of the exact search by allowing a small number of mistakes. Several families of ANNS algorithms exist, such as LSH BIBREF25 , PQ BIBREF26 , and PG BIBREF27 . In this work we use HNSW BIBREF24 , BIBREF28 , a graph-based incremental ANNS structure which can offer much better logarithmic complexity scaling in comparison with other approaches." ], [ "Many machine learning methods rely on efficient nearest neighbour search for solving specific sub-problems. Given the computational complexity of nearest neighbour search, approximate methods, driven by advanced index structures, hash or even graph-based approaches are used to speed up the bottleneck of costly comparison. ANNS algorithms have been used to speed up various sorts of machine learning models, including mixture model clustering BIBREF29 , case-based reasoning BIBREF30 to Gaussian process regression BIBREF31 , among others. Similarly to this work, BIBREF32 also rely on approximate nearest neighbours to speed up Memory-Augmented neural networks. Similarly to our work, they apply ANNS to query the external memory (in our case the KB memory) for $k$ closest words. They present drastic savings in speed and memory usage. Though as of this moment, our speed savings are not as drastic, the memory savings we achieve are sufficient so that we can train on WordNet, a dataset previously considered out of reach of NTP."
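To make Eq. 3 and the nearest-neighbour reduction concrete, the NumPy sketch below scores a query against every fact with an RBF kernel (the exact, brute-force computation) and then scores only the candidates returned by a k-nearest-neighbour search over concatenated symbol embeddings. The kernel bandwidth, the embedding sizes, the concatenation trick, and the use of scikit-learn's exact k-NN index (rather than the HNSW index used in the paper) are all assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
dim, n_facts = 16, 10_000

# Placeholder embeddings: each fact [f_p, f_s, f_o] is three symbol vectors.
facts = rng.normal(size=(n_facts, 3, dim))
goal = rng.normal(size=(3, dim))   # embeddings of [grandpaOf, abe, bart]
rho = 1.0                          # initial proof score

def rbf(x, y, mu=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 mu^2)), applied to the last axis.
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * mu ** 2))

# Brute force: min over the three per-symbol kernels (and rho), max over facts.
per_symbol = rbf(facts, goal)                  # shape (n_facts, 3)
scores = np.minimum(rho, per_symbol.min(axis=1))
print("exact best fact:", scores.argmax(), scores.max())

# Approximation: retrieve k candidate facts by nearest-neighbour search over
# concatenated symbol embeddings, then compute unification scores only there.
index = NearestNeighbors(n_neighbors=5).fit(facts.reshape(n_facts, -1))
_, cand = index.kneighbors(goal.reshape(1, -1))
cand = cand[0]
approx = np.minimum(rho, rbf(facts[cand], goal).min(axis=1))
print("approx best fact:", cand[approx.argmax()], approx.max())
```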
], [ "We compared results obtained by our model, which we refer to as NTP 2.0, with those obtained by the original NTP proposed by BIBREF0 . Results on several smaller datasets – namely Countries, Nations, Kinship, and UMLS – are shown in tab:results. When unifying goals with facts in the KB, for each goal, we use ANNS for retrieving the $k$ most similar (in embedding space) facts, and use those for computing the final proof scores. We report results for $k = 1$ , as we did not notice sensible differences for $k \\in \\lbrace 2, 5, 10 \\rbrace $ . However, we noticed sensible improvements in the case of Countries, and an overall decrease in performance in UMLS. A possible explanation is that ANNS (with $k = 1$ ), due to its inherently approximate nature, does not always retrieve the closest fact(s) exactly. This behaviour may be a problem in some datasets where exact nearest neighbour search is crucial for correctly answering queries. We also evaluated NTP 2.0 on WordNet BIBREF33 , a KB encoding lexical knowledge about the English language. In particular, we use the WordNet used by BIBREF34 for their experiments. This dataset is significantly larger than the other datasets used by BIBREF0 – it is composed by 38.696 entities, 11 relations, and the training set is composed by 112,581 facts. In WordNet, the accuracies on the validation and test sets were 65.29% and 65.72%, respectively – which is on par with the Distance Model, a Neural Link Predictor discussed by BIBREF34 , which achieves a test accuracy of 68.3%. However, we did not consider a full hyper-parameter sweep, and did not regularise the model using Neural Link Predictors, which sensibly improves NTP' predictive accuracy BIBREF0 . A subset of the induced rules is shown in tab:rules." ], [ "We proposed a way to sensibly scale up NTP by reducing parts of their inference steps to ANNS problems, for which very efficient and scalable solutions exist in the literature." ] ], "section_name": [ "Introduction", "Background", "Nearest Neighbourhood Search", "Related Work", "Experiments", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "38422bb256daf719d511514a73c66a26a115e80c" ], "answer": [ { "evidence": [ "Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In discrete unification, non-variable symbols are checked for equality, and the proof fails if the symbols differ. In NTP, rather than comparing symbols, their embedding representations are compared by means of a RBF kernel. This allows matching different symbols with similar semantics, such as matching relations like ${grandFatherOf}$ and ${grandpaOf}$ . Given a proof state $= (_, _)$ , where $_$ and $_$ denote a substitution set and a proof score, respectively, unification is computed as follows:", "The resulting proof score of $g$ is given by:", "$$ \\begin{aligned} \\max _{f \\in \\mathcal {K}} & \\; {unify}_(g, [f_{p}, f_{s}, f_{o}], (\\emptyset , )) \\\\ & = \\max _{f \\in \\mathcal {K}} \\; \\min \\big \\lbrace , \\operatorname{k}(_{\\scriptsize {grandpaOf}:}, _{f_{p}:}),\\\\ &\\qquad \\qquad \\qquad \\operatorname{k}(_{{abe}:}, _{f_{s}:}), \\operatorname{k}(_{{bart}:}, _{f_{o}:}) \\big \\rbrace , \\end{aligned}$$ (Eq. 3)", "where $f \\triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $_{s:}$ is the embedding representation of a symbol $s$ , $$ denotes the initial proof score, and $\\operatorname{k}({}\\cdot {}, {}\\cdot {})$ denotes the RBF kernel. Note that the maximum proof score is given by the fact $f \\in \\mathcal {K}$ that maximises the similarity between its components and the goal $\\mathcal {K}$0 : solving the maximisation problem in eq:inference can be equivalently stated as a nearest neighbour search problem. In this work, we use ANNS during the forward pass for considering only the most promising proof paths during the construction of the neural network." ], "extractive_spans": [ "'= ( , { ll k(h:, g:) if hV, gV\n\n1 otherwise } )\n\nwhere $_{h:}$ and $_{g:}$ denote the embedding representations of $h$ and $g$ , respectively." ], "free_form_answer": "", "highlighted_evidence": [ "Given a proof state $= (_, _)$ , where $_$ and $_$ denote a substitution set and a proof score, respectively, unification is computed as follows:", "The resulting proof score of $g$ is given by:\n\n$$ \\begin{aligned} \\max _{f \\in \\mathcal {K}} & \\; {unify}_(g, [f_{p}, f_{s}, f_{o}], (\\emptyset , )) \\\\ & = \\max _{f \\in \\mathcal {K}} \\; \\min \\big \\lbrace , \\operatorname{k}(_{\\scriptsize {grandpaOf}:}, _{f_{p}:}),\\\\ &\\qquad \\qquad \\qquad \\operatorname{k}(_{{abe}:}, _{f_{s}:}), \\operatorname{k}(_{{bart}:}, _{f_{o}:}) \\big \\rbrace , \\end{aligned}$$ (Eq. 3)\n\nwhere $f \\triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $_{s:}$ is the embedding representation of a symbol $s$ , $$ denotes the initial proof score, and $\\operatorname{k}({}\\cdot {}, {}\\cdot {})$ denotes the RBF kernel." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] }, { "annotation_id": [ "42a5795acc2d8246ff67dce657c9c026ad6ba9db" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). 
All the non-FAIL proof states are aggregated to obtain the final proof success (depicted in Figure 2). Colours and indices on arrows correspond to the respective KB rule application." ], "extractive_spans": [], "free_form_answer": "A sequence of logical statements represented in a computational graph", "highlighted_evidence": [ "FLOAT SELECTED: Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). All the non-FAIL proof states are aggregated to obtain the final proof success (depicted in Figure 2). Colours and indices on arrows correspond to the respective KB rule application." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] } ], "nlp_background": [ "two", "two" ], "paper_read": [ "no", "no" ], "question": [ "How are proof scores calculated?", "What are proof paths?" ], "question_id": [ "848ab388703c24faad79d83d254e4fd88ab27e2a", "68794289ed6078b49760dc5fdf88618290e94993" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). All the non-FAIL proof states are aggregated to obtain the final proof success (depicted in Figure 2). Colours and indices on arrows correspond to the respective KB rule application.", "Figure 2. Depiction of the proof aggregation for the computation graph presented in Figure 1. Proof states resulting from the computation graph construction are all aggregated to obtain the final success score of proving a query.", "Table 1. AUC-PR results on Countries and MRR and HITS@m on Kinship, Nations, and UMLS.", "Table 2. Rules induced on WordNet, with a confidence above 0.5." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png" ] }
[ "What are proof paths?" ]
[ [ "1807.08204-2-Figure1-1.png" ] ]
[ "A sequence of logical statements represented in a computational graph" ]
473
1704.08960
Neural Word Segmentation with Rich Pretraining
Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model, pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive to the best methods on six benchmarks.
{ "paragraphs": [ [ "There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statictical models.", "With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙角(corner)” BIBREF0 , which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams BIBREF6 , BIBREF1 and words BIBREF2 , BIBREF5 have also been shown to improve segmentation accuracies.", "With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on five-character windows BIBREF0 , BIBREF6 , BIBREF1 , BIBREF7 , as well as LSTMs on characters BIBREF3 , BIBREF8 and words BIBREF2 , BIBREF4 , BIBREF5 . For structured learning and inference, CRF has been used for character sequence labelling models BIBREF1 , BIBREF3 and structural beam search has been used for word-based segmentors BIBREF4 , BIBREF5 .", "Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing BIBREF9 . Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become a standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning, by collecting clues from raw texts more thoroughly such as mutual information and punctuation BIBREF10 , BIBREF11 , and making use of self-predictions BIBREF12 , BIBREF13 . It has also utilised heterogenous annotations such as POS BIBREF14 , BIBREF15 and segmentation under different standards BIBREF16 . To our knowledge, such rich external information has not been systematically investigated for neural segmentation.", "We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as a auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor.", "Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. 
Our implementation is based on LibN3L BIBREF20 . Code and models can be downloaded from http://gitHub.com/jiesutd/RichWordSegmentor" ], [ "Work on statistical word segmentation dates back to the 1990s BIBREF21 . State-of-the-art approaches include character sequence labeling models BIBREF22 using CRFs BIBREF23 , BIBREF24 and max-margin structured models leveraging word features BIBREF25 , BIBREF26 , BIBREF27 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF28 . Our work belongs to recent neural word segmentation.", "To our knowledge, there has been no work in the literature systematically investigating rich external resources for neural word segmentation training. Closest in spirit to our work, BIBREF11 empirically studied the use of various external resources for enhancing a statistical segmentor, including character mutual information, access variety information, punctuation and other statistical information. Their baseline is similar to ours in the sense that both character and word contexts are considered. On the other hand, their model is statistical while ours is neural. Consequently, they integrate external knowledge as features, while we integrate it by shared network parameters. Our results show a similar degree of error reduction compared to theirs by using external data.", "Our model inherits from previous findings on context representations, such as character windows BIBREF6 , BIBREF1 , BIBREF7 and LSTMs BIBREF3 , BIBREF8 . Similar to BIBREF5 and BIBREF4 , we use word context on top of character context. However, words play a relatively less important role in our model, and we find that word LSTM, which has been used by all previous neural segmentation work, is unnecessary for our model. Our model is conceptually simpler and more modularised compared with BIBREF5 and BIBREF4 , allowing a central sub module, namely a five-character context window, to be pretrained." ], [ "Our segmentor works incrementally from left to right, as the example shown in Table TABREF1 . At each step, the state consists of a sequence of words that have been fully recognized, denoted as INLINEFORM0 , a current partially recognized word INLINEFORM1 , and a sequence of next incoming characters, denoted as INLINEFORM2 , as shown in Figure FIGREF4 . Given an input sentence, INLINEFORM3 and INLINEFORM4 are initialized to INLINEFORM5 and INLINEFORM6 , respectively, and INLINEFORM7 contains all the input characters. At each step, a decision is made on INLINEFORM8 , either appending it as a part of INLINEFORM9 , or seperating it as the beginning of a new word. The incremental process repeats until INLINEFORM10 is empty and INLINEFORM11 is null again ( INLINEFORM12 , INLINEFORM13 ). Formally, the process can be regarded as a state-transition process, where a state is a tuple INLINEFORM14 , and the transition actions include Sep (seperate) and App (append), as shown by the deduction system in Figure FIGREF7 .", "In the figure, INLINEFORM0 denotes the score of a state, given by a neural network model. The score of the initial state (i.e. axiom) is 0, and the score of a non-axiom state is the sum of scores of all incremental decisions resulting in the state. 
Similar to BIBREF5 and BIBREF4 , our model is a global structural model, using the overall score to disambiguate states, which correspond to sequences of inter-dependent transition actions.", "Different from previous work, the structure of our scoring network is shown in Figure FIGREF4 . It consists of three main layers. On the bottom is a representation layer, which derives dense representations INLINEFORM0 and INLINEFORM1 for INLINEFORM2 and INLINEFORM3 , respectively. We compare various distributed representations and neural network structures for learning INLINEFORM4 and INLINEFORM5 , detailed in Section SECREF8 . On top of the representation layer, we use a hidden layer to merge INLINEFORM6 and INLINEFORM7 into a single vector DISPLAYFORM0 ", "The hidden feature vector INLINEFORM0 is used to represent the state INLINEFORM1 , for calculating the scores of the next action. In particular, a linear output layer with two nodes is employed: DISPLAYFORM0 ", "The first and second node of INLINEFORM0 represent the scores of Sep and App given INLINEFORM1 , namely INLINEFORM2 , INLINEFORM3 respectively." ], [ "Characters. We investigate two different approaches to encode incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF22 , BIBREF1 , using five-character window INLINEFORM0 to represent incoming characters. Shown in Figure FIGREF13 , a multi-layer perceptron (MLP) is employed to derive a five-character window vector INLINEFORM1 from single-character vector representations INLINEFORM2 . DISPLAYFORM0 ", "For the latter, we follow recent work BIBREF3 , BIBREF5 , using a bi-directional LSTM to encode input character sequence. In particular, the bi-directional LSTM hidden vector INLINEFORM0 of the next incoming character INLINEFORM1 is used to represent the coming characters INLINEFORM2 given a state. Intuitively, a five-character window provides a local context from which the meaning of the middle character can be better disambiguated. LSTM, on the other hand, captures larger contexts, which can contain more useful clues for dismbiguation but also irrelevant information. It is therefore interesting to investigate a combination of their strengths, by first deriving a locally-disambiguated version of INLINEFORM3 , and then feed it to LSTM for a globally disambiguated representation.", "Now with regard to the single-character vector representation INLINEFORM0 , we follow previous work and consider both character embedding INLINEFORM1 and character-bigram embedding INLINEFORM2 , investigating the effect of each on the accuracies. When both INLINEFORM3 and INLINEFORM4 are utilized, the concatenated vector is taken as INLINEFORM5 .", "Partial Word. We take a very simple approach to representing the partial word INLINEFORM0 , by using the embedding vectors of its first and last characters, as well as the embedding of its length. Length embeddings are randomly initialized and then tuned in model training. INLINEFORM1 has relatively less influence on the empirical segmentation accuracies. DISPLAYFORM0 ", "Word. Similar to the character case, we investigate two different approaches to encoding incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF25 , BIBREF27 , using the two-word window INLINEFORM0 to represent recognized words. A hidden layer is employed to derive a two-word vector INLINEFORM1 from single word embeddings INLINEFORM2 and INLINEFORM3 . 
DISPLAYFORM0 ", "For the latter, we follow BIBREF5 and BIBREF4 , using an uni-directional LSTM on words that have been recognized." ], [ "Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed sementic information from large raw texts for reducing sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as an unit, learning the MLP parameter together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.", "Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11 . For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuations BIBREF10 .", "Punctuation can serve as a type of explicit mark-up BIBREF30 , indicating that the two characters on its left and right belong to two different words. We leverage this source of information by extracting character five-grams excluding punctuation from raw sentences, using them as inputs to classify whether there is punctuation before middle character. Denoting the resulting five character window as INLINEFORM0 , the MLP in Figure FIGREF13 is used to derive its representation INLINEFORM1 , which is then fed to a softmax layer for binary classification: DISPLAYFORM0 ", "Here INLINEFORM0 indicates the probability of a punctuation mark existing before INLINEFORM1 . Standard backpropagation training of the MLP in Figure FIGREF13 can be done jointly with the training of INLINEFORM2 and INLINEFORM3 . After such training, the embedding INLINEFORM4 and MLP values can be used to initialize the corresponding parameters for INLINEFORM5 in the main segmentor, before its training.", "Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or deriving statistical features BIBREF12 . We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0 , INLINEFORM1 is derived using the MLP in Figure FIGREF13 , and then used to classify the segmentation of INLINEFORM2 into B(begining)/M(middle)/E(end)/S(single character word) labels. DISPLAYFORM0 ", "Here INLINEFORM0 and INLINEFORM1 are model parameters. Training can be done in the same way as training with punctuation.", "Heterogenous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation on leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16 . We try to utilize heterogenous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character windows network. DISPLAYFORM0 ", "POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14 , BIBREF15 . 
We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS on each character, according to the character window representation INLINEFORM0 . In particular, given INLINEFORM1 , the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0 ", "Multitask Learning. While each type of external training data can offer one source of segmentation information, different external data can be complimentary to each other. We aim to inject all sources of information into the character window representation INLINEFORM0 by using it as a shared representation for different classification tasks. Neural model have been shown capable of doing multi-task learning via parameter sharing BIBREF19 . Shown in Figure FIGREF13 , in our case, the output layer for each task is independent, but the hidden layer INLINEFORM1 and all layers below INLINEFORM2 are shared.", "For training with all sources above, we randomly sample sentences from the Punc./Auto-seg/Heter./POS sources with the ratio of 10/1/1/1, for each sentence in punctuation corpus we take only 2 characters (character before and after the punctuation) as input instances.", "[t] InputInput OutputOutput Parameters: INLINEFORM0 ", "Process:", "agenda INLINEFORM0 INLINEFORM1 ", "j in [0:Len( INLINEFORM0 )] beam = []", " INLINEFORM0 in agenda INLINEFORM1 = Action( INLINEFORM2 , Sep)", "Add( INLINEFORM0 , beam)", " INLINEFORM0 = Action( INLINEFORM1 , App)", "Add( INLINEFORM0 , beam)", " agenda INLINEFORM0 Top(beam, B)", " INLINEFORM0 agenda INLINEFORM1 = BestIn(agenda)", "Update( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 )", "return", " INLINEFORM0 = BestIn(agenda)", "Update( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 )", "return", "Training" ], [ "To train the main segmentor, we adopt the global transition-based learning and beam-search strategy of BIBREF31 . For decoding, standard beam search is used, where the B best partial output hypotheses at each step are maintained in an agenda. Initially, the agenda contains only the start state. At each step, all hypotheses in the agenda are expanded, by applying all possible actions and B highest scored resulting hypotheses are used as the agenda for the next step.", "For training, the same decoding process is applied to each training example INLINEFORM0 . At step INLINEFORM1 , if the gold-standard sequence of transition actions INLINEFORM2 falls out of the agenda, max-margin update is performed by taking the current best hypothesis INLINEFORM3 in the beam as a negative example, and INLINEFORM4 as a positive example. The loss function is DISPLAYFORM0 ", "where INLINEFORM0 is the number of incorrect local decisions in INLINEFORM1 , and INLINEFORM2 controls the score margin.", "The strategy above is early-update BIBREF32 . On the other hand, if the gold-standard hypothesis does not fall out of the agenda until the full sentence has been segmented, a final update is made between the highest scored hypothesis INLINEFORM0 (non-gold standard) in the agenda and the gold-standard INLINEFORM1 , using exactly the same loss function. Pseudocode for the online learning algorithm is shown in Algorithm SECREF14 .", "We use Adagrad BIBREF33 to optimize model parameters, with an initial learning rate INLINEFORM0 . INLINEFORM1 regularization and dropout BIBREF34 on input are used to reduce overfitting, with a INLINEFORM2 weight INLINEFORM3 and a dropout rate INLINEFORM4 . 
All the parameters in our model are randomly initialized to a value INLINEFORM5 , where INLINEFORM6 BIBREF35 . We fine-tune character and character bigram embeddings, but not word embeddings, acccording to BIBREF5 ." ], [ "Data. We use Chinese Treebank 6.0 (CTB6) BIBREF36 as our main dataset. Training, development and test set splits follow previous work BIBREF37 . In order to verify the robustness of our model, we additionally use SIGHAN 2005 bake-off BIBREF38 and NLPCC 2016 shared task for Weibo segmentation BIBREF39 as test datasets, where the standard splits are used. For pretraining embedding of words, characters and character bigrams, we use Chinese Gigaword (simplified Chinese sections), automatically segmented using ZPar 0.6 off-the-shelf BIBREF25 , the statictics of which are shown in Table TABREF24 .", "For pretraining character representations, we extract punctuation classification data from the Gigaword corpus, and use the word-based ZPar and a standard character-based CRF model BIBREF40 to obtain automatic segmentation results. We compare pretraining using ZPar results only and using results that both segmentors agree on. For heterogenous segmentation corpus and POS data, we use a People's Daily corpus of 5 months. Statistics are listed in Table TABREF24 .", "Evaluation. The standard word precision, recall and F1 measure BIBREF38 are used to evaluate segmentation performances.", "Hyper-parameter Values. We adopt commonly used values for most hyperparameters, but tuned the sizes of hidden layers on the development set. The values are summarized in Table TABREF20 ." ], [ "We perform development experiments to verify the usefulness of various context representations, network configurations and different pretraining methods, respectively.", "The influence of character and word context representations are empirically studied by varying the network structures for INLINEFORM0 and INLINEFORM1 in Figure FIGREF4 , respectively. All the experiments in this section are performed using a beam size of 8.", "Character Context. We fix the word representation INLINEFORM0 to a 2-word window and compare different character context representations. The results are shown in Table TABREF27 , where “no char” represents our model without INLINEFORM1 , “5-char window” represents a five-character window context, “char LSTM” represents character LSTM context and “5-char window + LSTM” represents a combination, detailed in Section SECREF8 . “-char emb” and “-bichar emb” represent the combined window and LSTM context without character and character-bigram information, respectively.", "As can be seen from the table, without character information, the F-score is 84.62%, demonstrating the necessity of character contexts. Using window and LSTM representations, the F-scores increase to 95.41% and 95.51%, respectively. A combination of the two lead to further improvement, showing that local and global character contexts are indeed complementary, as hypothesized in Section SECREF8 . Finally, by removing character and character-bigram embeddings, the F-score decreases to 95.20% and 94.27%, respectively, which suggests that character bigrams are more useful compared to character unigrams. This is likely because they contain more distinct tokens and hence offer a larger parameter space.", "Word Context. The influence of various word contexts are shown in Table TABREF28 . Without using word information, our segmentor gives an F-score of 95.66% on the development data. 
Using a context of only INLINEFORM0 (1-word window), the F-measure increases to 95.78%. This shows that word contexts are far less important in our model compared to character contexts, and also compared to word contexts in previous word-based segmentors BIBREF5 , BIBREF4 . This is likely due to the difference in our neural network structures, and that we fine-tune both character and character bigram embeddings, which significantly enlarges the adjustable parameter space as compared with BIBREF5 . The fact that word contexts can contribute relatively less than characters in a word is also not surprising in the sense that word-based neural segmentors do not outperform the best character-based models by large margins. Given that character context is what we pretrain, our model relies more heavily on them.", "With both INLINEFORM0 and INLINEFORM1 being used for the context, the F-score further increases to 95.86%, showing that a 2-word window is useful by offering more contextual information. On the other hand, when INLINEFORM2 is also considered, the F-score does not improve further. This is consistent with previous findings of statistical word segmentation BIBREF25 , which adopt a 2-word context. Interestingly, using a word LSTM does not bring further improvements, even when it is combined with a window context. This suggests that global word contexts may not offer crucial additional information compared with local word contexts. Intuitively, words are significantly less polysemous compared with characters, and hence can serve as effective contexts even if used locally, to supplement a more crucial character context.", "We verify the effectiveness of structured learning and inference by measuring the influence of beam size on the baseline segmentor. Figure FIGREF30 shows the F-scores against different numbers of training iterations with beam size 1,2,4,8 and 16, respectively. When the beam size is 1, the inference is local and greedy. As the size of the beam increases, more global structural ambiguities can be resolved since learning is designed to guide search. A contrast between beam sizes 1 and 2 demonstrates the usefulness of structured learning and inference. As the beam size increases, the gain by doubling the beam size decreases. We choose a beam size of 8 for the remaining experiments for a tradeoff between speed and accuracy.", "Table TABREF31 shows the effectiveness of rich pretraining of INLINEFORM0 on the development set. In particular, by using punctuation information, the F-score increases from 95.86% to 96.25%, with a relative error reduction of 9.4%. This is consistent with the observation of BIBREF11 , who show that punctuation is more effective compared with mutual information and access variety as semi-supervised data for a statistical word segmentation model. With automatically-segmented data, heterogenous segmentation and POS information, the F-score increases to 96.26%, 96.27% and 96.22%, respectively, showing the relevance of all information sources to neural segmentation, which is consistent with observations made for statistical word segmentation BIBREF16 , BIBREF12 , BIBREF28 . Finally, by integrating all above information via multi-task learning, the F-score is further improved to 96.48%, with a 15.0% relative error reduction.", "Both our model and BIBREF5 use global learning and beam search, but our network is different. BIBREF5 utilizes the action history with LSTM encoder, while we use partial word rather than action information. 
Besides, the character and character bigram embeddings are fine-tuned in our model while BIBREF5 set the embeddings fixed during training. We study the F-measure distribution with respect to sentence length on our baseline model, multitask pretraining model and BIBREF5 . In particular, we cluster the sentences in the development dataset into 6 categories based on their length and evaluate their F1-values, respectively. As shown in Figure FIGREF35 , the models give different error distributions, with our models being more robust to the sentence length compared with BIBREF5 . Their model is better on very short sentences, but worse on all other cases. This shows the relative advantages of our model." ], [ "Our final results on CTB6 are shown in Table TABREF38 , which lists the results of several current state-of-the-art methods. Without multitask pretraining, our model gives an F-score of 95.44%, which is higher than the neural segmentor of BIBREF5 , which gives the best accuracies among pure neural segments on this dataset. By using multitask pretraining, the result increases to 96.21%, with a relative error reduction of 16.9%. In comparison, BIBREF11 investigated heterogenous semi-supervised learning on a state-of-the-art statistical model, obtaining a relative error reduction of 13.8%. Our findings show that external data can be as useful for neural segmentation as for statistical segmentation.", "Our final results compare favourably to the best statistical models, including those using semi-supervised learning BIBREF11 , BIBREF12 , and those leveraging joint POS and syntactic information BIBREF37 . In addition, it also outperforms the best neural models, in particular BIBREF5 *, which is a hybrid neural and statistical model, integrating manual discrete features into their word-based neural model. We achieve the best reported F-score on this dataset. To our knowledge, this is the first time a pure neural network model outperforms all existing methods on this dataset, allowing the use of external data . We also evaluate our model pretrained only on punctuation and auto-segmented data, which do not include additional manual labels. The results on CTB test data show the accuracy of 95.8% and 95.7%, respectivley, which are comparable with those statistical semi-supervised methods BIBREF11 , BIBREF12 . They are also among the top performance methods in Table TABREF38 . Compared with discrete semi-supervised methods BIBREF11 , BIBREF12 , our semi-supervised model is free from hand-crafted features.", "In addition to CTB6, which has been the most commonly adopted by recent segmentation research, we additionally evaluate our results on the SIGHAN 2005 bakeoff and Weibo datasets, to examine cross domain robustness. Different state-of-the-art methods for which results are recorded on these datasets are listed in Table TABREF40 . Most neural models reported results only on the PKU and MSR datasets of the bakeoff test sets, which are in simplified Chinese. The AS and CityU corpora are in traditional Chinese, sourced from Taiwan and Hong Kong corpora, respectively. We map them into simplified Chinese before segmentation. The Weibo corpus is in a yet different genre, being social media text. BIBREF41 achieved the best results on this dataset by using a statistical model with features learned using external lexicons, the CTB7 corpus and the People Daily corpus. 
Similar to Table TABREF38 , our method gives the best accuracies on all corpora except for MSR, where it underperforms the hybrid model of BIBREF5 by 0.2%. To our knowledge, we are the first to report results for a neural segmentor on more than 3 datasets, with competitive results consistently. It verifies that knowledge learned from a certain set of resources can be used to enhance cross-domain robustness in training a neural segmentor for different datasets, which is of practical importance." ], [ "We investigated rich external resources for enhancing neural word segmentation, by building a globally optimised beam-search model that leverages both character and word contexts. Taking each type of external resource as an auxiliary classification task, we use neural multi-task learning to pre-train a set of shared parameters for character contexts. Results show that rich pretraining leads to 15.4% relative error reduction, and our model gives results highly competitive to the best systems on six different benchmarks." ], [ "We thank the anonymous reviewers for their insightful comments and the support of NSFC 61572245. We would like to thank Meishan Zhang for his insightful discussion and assisting coding. Yue Zhang is the corresponding author." ] ], "section_name": [ "Introduction", "Related Work", "Model", "Representation Learning", "Pretraining", "Decoding and Training", "Experimental Settings", "Development Experiments", "Final Results", "Conclusion", "Acknowledgments" ] }
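The pretraining scheme in this entry shares a five-character window network across several auxiliary classifiers (punctuation, auto-segmented B/M/E/S, heterogeneous-treebank B/M/E/S, and POS). The PyTorch sketch below shows one way such parameter sharing could be laid out; the layer sizes, the tanh hidden layer, the bigram handling, and the label inventories are assumptions for illustration and are not taken from the paper's configuration (Table TABREF20).

```python
import torch
import torch.nn as nn

class CharWindowEncoder(nn.Module):
    """Shared encoder for a five-character window (characters + bigrams)."""
    def __init__(self, n_chars, n_bigrams, emb_dim=50, hidden_dim=200, window=5):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb_dim)
        self.bigram_emb = nn.Embedding(n_bigrams, emb_dim)
        self.proj = nn.Linear(window * 2 * emb_dim, hidden_dim)

    def forward(self, chars, bigrams):            # both: (batch, window)
        x = torch.cat([self.char_emb(chars), self.bigram_emb(bigrams)], dim=-1)
        return torch.tanh(self.proj(x.flatten(1)))  # (batch, hidden_dim)

class MultiTaskPretrainer(nn.Module):
    """Task-specific output layers on top of the shared window representation."""
    def __init__(self, encoder, hidden_dim=200, n_pos=30):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleDict({
            "punct": nn.Linear(hidden_dim, 2),     # punctuation before centre char?
            "auto_seg": nn.Linear(hidden_dim, 4),  # B/M/E/S from auto-segmented text
            "heter": nn.Linear(hidden_dim, 4),     # B/M/E/S from a heterogeneous treebank
            "pos": nn.Linear(hidden_dim, n_pos),   # POS of the containing word
        })

    def forward(self, chars, bigrams, task):
        return self.heads[task](self.encoder(chars, bigrams))

# After pretraining, encoder.state_dict() would initialise the corresponding
# character-window module of the segmentor before supervised training.
enc = CharWindowEncoder(n_chars=6000, n_bigrams=200_000)
model = MultiTaskPretrainer(enc)
logits = model(torch.zeros(8, 5, dtype=torch.long),
               torch.zeros(8, 5, dtype=torch.long), "punct")
print(logits.shape)  # torch.Size([8, 2])
```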
{ "answers": [ { "annotation_id": [ "5037fbd30dcf60b0602d223a6912d8e89b273b47" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "3870a411410f7a4827f564d8204005e8088a9d7b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Statistics of external data.", "Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed sementic information from large raw texts for reducing sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as an unit, learning the MLP parameter together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.", "Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11 . For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuations BIBREF10 .", "Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or deriving statistical features BIBREF12 . We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0 , INLINEFORM1 is derived using the MLP in Figure FIGREF13 , and then used to classify the segmentation of INLINEFORM2 into B(begining)/M(middle)/E(end)/S(single character word) labels. DISPLAYFORM0", "Heterogenous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation on leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16 . We try to utilize heterogenous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character windows network. DISPLAYFORM0", "POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14 , BIBREF15 . We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS on each character, according to the character window representation INLINEFORM0 . In particular, given INLINEFORM1 , the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0" ], "extractive_spans": [], "free_form_answer": "Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Statistics of external data.", "We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.", "Raw Text.", "Automatically Segmented Text. ", "Heterogenous Training Data.", "POS Data." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "eaa48947366449868a0787d03953c813244f75da" ], "answer": [ { "evidence": [ "We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as a auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor." ], "extractive_spans": [ "five-character window context" ], "free_form_answer": "", "highlighted_evidence": [ "Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "What is the size of the model?", "What external sources are used?", "What submodules does the model consist of?" ], "question_id": [ "62048ea0aab61abe21fb30d70c4a1bc5fb946137", "25e4dbc7e211a1ebe02ee8dff675b846fb18fdc5", "9893c5f36f9d503678749cb0466eeaa0cfc9413f" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: A transition based word segmentation example.", "Figure 2: Deduction system, where ⊕ denotes string concatenation.", "Figure 1: Overall model.", "Figure 3: Shared character representation.", "Table 2: Hyper-parameter values.", "Table 3: Statistics of external data.", "Table 4: Influence of character contexts.", "Table 5: Influence of word contexts.", "Figure 4: F1 measure against the training epoch.", "Figure 5: F1 measure against the sentence length.", "Table 6: Influence of pretraining.", "Table 8: Main results on other test datasets.", "Table 7: Main results on CTB6." ], "file": [ "2-Table1-1.png", "3-Figure2-1.png", "3-Figure1-1.png", "4-Figure3-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Figure4-1.png", "8-Figure5-1.png", "8-Table6-1.png", "9-Table8-1.png", "9-Table7-1.png" ] }
[ "What external sources are used?" ]
[ [ "1704.08960-Pretraining-1", "1704.08960-6-Table3-1.png", "1704.08960-Pretraining-0" ] ]
[ "Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily" ]
474
2002.05058
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models
Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results with skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a little amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
{ "paragraphs": [ [ "Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation.", "Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.", "To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. 
The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.", "The contribution of this paper is threefold:", "We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.", "We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.", "We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem." ], [ "Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.", "Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.", "Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.", "Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. 
However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted, which leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. The Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to obtain, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.", "Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.", "Another line of work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by pitting generators against discriminators. Their approach is an approximation of skill rating, as the original skill rating system requires games played by two symmetric players, while in their system the players are asymmetric. Their approach also does not include the “tie” option, and thus can not distinguish whether the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs, while our approach can be used for any NLG model." ], [ "We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models." ], [ "The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. 
We find that when comparing two NLG models, instead of asking human annotators to assign scores separately to samples generated by different models, which resembles the case of the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model in a pairwise fashion and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.", "The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotators when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback signals from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.", "We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and that it is hard to tell the difference in terms of quality when the two compared samples are both machine-generated or both human written references. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\\in S_{+}$ and a generated sample $s_{-}\\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or both from the generated samples, we assign the label “indistinguishable ($\\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing us to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.", "One problem with the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. 
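To make the strong supervision construction above concrete, the following minimal sketch builds such pairs from a set of references and a set of generated samples of one checkpoint. The function and label names are illustrative, not taken from the authors' code.

```python
import itertools
import random

# Minimal sketch of the "strong supervision" pair construction described above.
# Label encoding (ours, not the authors'): 0 = first sample better (>),
# 1 = first sample worse (<), 2 = indistinguishable (~).
BETTER, WORSE, TIE = 0, 1, 2

def build_strong_pairs(real_samples, generated_samples):
    pairs = []
    # a human reference versus a machine-generated sample: the reference wins
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, BETTER))
            pairs.append((s_neg, s_pos, WORSE))
    # two references, or two samples from the same checkpoint: labelled as a tie
    for s_a, s_b in itertools.combinations(real_samples, 2):
        pairs.append((s_a, s_b, TIE))
    for s_a, s_b in itertools.combinations(generated_samples, 2):
        pairs.append((s_a, s_b, TIE))
    random.shuffle(pairs)
    return pairs

pairs = build_strong_pairs(["a human-written story ..."], ["a sampled story ..."])
```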
During inference, the model is therefore required to capture the quality relation in the training examples and to generalize well in order to successfully compare two samples rather than simply classifying them as indistinguishable, which would provide relatively little information for evaluating NLG models.", "To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically, and sometimes it is hard to decide whether the model has begun to overfit the training data and its quality has started to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the quality margin between the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with a larger margin (i.e. more training iterations between the two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.", "The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6", "where $\\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \\in \\lbrace >,<,\\approx \\rbrace $) for the pair ($x_1$, $x_2$).", "As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator; the architecture of the resulting comparative evaluator is illustrated in Figure 1. Note that the compared samples A and B are based on the same context, which ensures that they are comparable.
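From the surrounding definitions, the MLE objective referred to above (the eq DISPLAY_FORM6 placeholder) is presumably to maximize $\\sum _{(x_1, x_2) \\in \\mathcal {X}} \\log D_\\phi ^{Q(x_1, x_2)}(x_1, x_2)$, i.e. a standard three-way cross-entropy loss when negated. A minimal sketch of the BERT-based comparative evaluator is given below; it uses the HuggingFace Transformers API as one possible realization (the paper fine-tunes BERT-large; the base model is used here only to keep the sketch light), and it is not the authors' released implementation.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# One possible realization of the comparative evaluator: BERT as a 3-way
# sentence-pair classifier over (context + sample A, context + sample B).
# The HuggingFace API and input formatting below are assumptions for illustration.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def comparative_loss(context, sample_a, sample_b, label):
    """label: 0 (A better), 1 (A worse), 2 (indistinguishable)."""
    inputs = tokenizer(context + " " + sample_a, context + " " + sample_b,
                       return_tensors="pt", truncation=True, padding=True)
    out = model(**inputs, labels=torch.tensor([label]))
    return out.loss  # negative log-likelihood of the true comparison label

loss = comparative_loss("How was your day?", "Pretty good, thanks!", "I don't know.", 0)
loss.backward()
```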
By taking the trained comparative evaluator as the “playground” and the NLG models as “players”, a “player-vs-player” game is played by sampling one output from each NLG model conditioned on the same input, and the game outcome is decided by the comparative evaluator.", "Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating, and vice versa. We come up with a simple rule which increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the change it would receive for a win/loss when it draws with another player with a higher/lower skill rating. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between the two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the point at which the order of the skill ratings of the compared models stays the same after each model has been selected at least 50 times. While the sampling procedure could be optimized by Bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling (a simplified sketch of this rating loop is given below)." ], [ "We set up experiments in order to answer the following research questions:", "RQ1: Can the comparative evaluator correlate better with human preference at the sample level than previous automated metrics when evaluating open domain NLG models?", "RQ2: Can the comparative evaluator correlate better with human preference at the model level, so that our approach can better measure the progress on open domain NLG?", "RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent does this problem affect the quality of the final NLG model when performing hyperparameter search and early-stopping?", "RQ4: If the previous problem exists, can the proposed comparative evaluator reduce this problem?" ], [ "We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics are thus critical. 
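Returning to the skill-rating procedure described at the beginning of this section: the paper uses Glicko-2, but the loop itself can be sketched with a plain Elo-style update together with the stated draw rule (a draw moves a rating by a fraction, e.g. 0.1, of the normal win/loss step). All names and constants below are illustrative, not the authors' implementation.

```python
import random

# Simplified sketch of the model-level skill-rating loop described above.
# Glicko-2 is replaced here by a plain Elo-style update for brevity.
K, TIE_RATIO = 32.0, 0.1

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, a, b, outcome):
    """outcome: 1.0 if model a wins, 0.0 if it loses, 0.5 for a draw."""
    e_a = expected(ratings[a], ratings[b])
    if outcome == 0.5:
        # draw: shift each rating by a fraction of the full win/loss step
        delta = TIE_RATIO * K * (0.5 - e_a)
        ratings[a] += delta
        ratings[b] -= delta
    else:
        ratings[a] += K * (outcome - e_a)
        ratings[b] += K * ((1.0 - outcome) - (1.0 - e_a))

def skill_rate(models, prompts, compare, n_games=5000):
    """models: {name: callable prompt -> text}; compare(a, b) -> 1.0 / 0.5 / 0.0."""
    ratings = {name: 1500.0 for name in models}
    for _ in range(n_games):
        (name_a, gen_a), (name_b, gen_b) = random.sample(list(models.items()), 2)
        prompt = random.choice(prompts)
        update(ratings, name_a, name_b, compare(gen_a(prompt), gen_b(prompt)))
    return ratings
```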
For the open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialog." ], [ "As our objective is to evaluate the evaluators rather than to compare state-of-the-art models, we choose three representative sequence-to-sequence architectures: the LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 models. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.", "Regarding the evaluation metric (and the criteria for hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.", "The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparisons between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the winning rate of the latest checkpoint keeps being smaller than its losing rate for 5 iterations." ], [ "The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator that are trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT." ], [ "As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics to guide hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations for hyperparameter selection and early-stopping with the five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with other variants fixed.", "We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated scores as training examples, 2) we can construct up to $\\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation would likely be biased toward our approach.", "We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators, who are all graduate students with good English language proficiency, to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scale is from 1 to 5, where a higher score indicates better overall sample quality. 
Based on the experimental results of BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 directly and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored from 1 to 5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score of $\\kappa =0.53$ for direct scoring and $\\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator." ], [ "To test the correlation of different automated metrics with human preference, we employ the different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with the human scores. For the comparative evaluator, as the evaluation is performed pairwise and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from the machine-generated samples for each task and compare each sample with all references using the comparative evaluator; a sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating of each sample as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000.", "The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising, as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models), and this variance does not exist when we regard a sample as a model which always generates the same sample.", "As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we propose three different approaches to get an absolute score for each model: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as the model-level score, 3) we use the proposed skill rating system to get a model-level skill rating for each compared model.", "Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. 
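The reference-based scoring rule (approach 1 above) and the correlation computation can be written out as follows, assuming SciPy is used for the correlation coefficients; `compare` stands for the trained comparative evaluator and returns 1.0, 0.5, or 0.0 for a win, tie, or loss of its first argument. This is an illustration of the stated rule, not the authors' code.

```python
from scipy.stats import pearsonr, spearmanr

# Sketch of the reference-based sample scoring (3 / 1 / 0 points per reference)
# and of the metric-vs-human correlation computation described above.
def reference_based_score(sample, references, compare):
    points = {1.0: 3, 0.5: 1, 0.0: 0}
    return sum(points[compare(sample, ref)] for ref in references)

def correlation_with_humans(samples, references, human_scores, compare):
    metric_scores = [reference_based_score(s, references, compare) for s in samples]
    return pearsonr(metric_scores, human_scores), spearmanr(metric_scores, human_scores)
```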
These results demonstrate the effectiveness of the skill rating system for performing model-level comparison from pairwise sample-level evaluations. In addition, the poor correlation of conventional evaluation metrics, including BLEU and perplexity, with human judgment demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation." ], [ "We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping, respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) it succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and by the average human-annotated score of the selected models.", "The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal choices when performing hyperparameter search and selecting the best performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and show that the proposed method can, to some extent, alleviate it.", "We present several comparison examples from the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (e.g. “I don't know”) should be considered of worse quality. The second example suggests that our approach handles the diversity in possible responses well, as it regards both the positive response and the negative response as valid. Hopefully, these examples may provide some insight into why the proposed metric correlates better with human preference.", "To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:", "w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.", "w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality than those generated by NLG models.", "w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.", "w/o human preference annotation: Training without human annotated preference data (i.e. only with strong and weak supervision).", "w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.", "w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.", "We evaluate these model variants on the Dailydialog dataset. 
Results are presented in Table 5. We can see that comparison-based evaluation is very effective as our model correlates much better than adversarial evaluator. The tie option is also very important as it can prevent the comparative evaluator from making uncertain decision and model the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference without training with human preference annotation, this is very important in practice as human annotations are not always available. Finally, we find that transferring the natural language understanding ability from BERT to be very important for the final performance." ], [ "In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our model allows the model to admit its uncertainty with the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.", "By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices." ] ], "section_name": [ "Introduction", "Related Work", "Methodology", "Methodology ::: Learning to Compare", "Methodology ::: Skill Rating", "Experiments", "Experiments ::: Experimental Settings ::: Datasets", "Experiments ::: Experimental Settings ::: Compared Models and Metrics", "Experiments ::: Experimental Settings ::: Detail of Parameterized Evaluators", "Experiments ::: Experimental Settings ::: Human Evaluation Procedure", "Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation", "Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation", "Experiments ::: Experimental Designs & Results ::: RQ3&4: Automated Metrics for Model Training", "Experiments ::: Qualitative Analysis", "Experiments ::: Ablation Study", "Discussion and Conclusion" ] }
{ "answers": [ { "annotation_id": [ "38adedaca4171f5cdb062128586f8666bb16d56b" ], "answer": [ { "evidence": [ "The comparative evaluator is trained with maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6", "where $\\mathcal {X}$ is the set of pairwise training examples contructed as described above, $Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair ($x_1$, $x_2$), $D_\\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \\in \\lbrace >,<,\\approx \\rbrace $) for the pair ($x_1$, $x_2$)." ], "extractive_spans": [ "human preference annotation is available", "$Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair" ], "free_form_answer": "", "highlighted_evidence": [ "The comparative evaluator is trained with maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6\n\nwhere $\\mathcal {X}$ is the set of pairwise training examples contructed as described above, $Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair ($x_1$, $x_2$), $D_\\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \\in \\lbrace >,<,\\approx \\rbrace $) for the pair ($x_1$, $x_2$)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "d449825aa0a1d27531a41dd5ec098834f1fde4a8" ], "answer": [ { "evidence": [ "Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.", "Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.", "Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.", "Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. 
However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. Recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization." ], "extractive_spans": [ "Text Overlap Metrics, including BLEU", "Perplexity", "Parameterized Metrics" ], "free_form_answer": "", "highlighted_evidence": [ "Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.", "Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models.", "Perplexity is commonly used to evaluate the quality of a language model.", "Parameterized Metrics learn a parameterized model to evaluate generated text." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "866e0c1731c425aba356086036f0d38fe8fc58ec" ], "answer": [ { "evidence": [ "The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample.", "Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. 
This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation.", "FLOAT SELECTED: Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.", "FLOAT SELECTED: Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets." ], "extractive_spans": [], "free_form_answer": "Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553", "highlighted_evidence": [ "The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity.", "Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores.", "FLOAT SELECTED: Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.", "FLOAT SELECTED: Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "d862334d67ecc306b5e570f5da69bca204db6984" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How they add human prefference annotation to fine-tuning process?", "What previous automated evalution approaches authors mention?", "How much better peformance is achieved in human evaluation when model is trained considering proposed metric?", "Do the authors suggest that proposed metric replace human evaluation on this task?" ], "question_id": [ "5d85d7d4d013293b4405beb4b53fa79ac7c03401", "6dc9960f046ec6bd280a721724458f66d5a9a585", "75b69eef4a38ec16df63d60be9708a3c44a79c56", "7488855f09b97eb6a027212fb7ace1d338f36a2b" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: model architecture of the comparative evaluator, the context is concatenated with generated samples.", "Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.", "Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets.", "Table 3: Performance of different metrics in hyperparameter tuning and earlystop checkpoint selecting.", "Table 4: Examples of comparison results between two generated samples given context.", "Table 5: Model-level correlation between ablated variants and human judgments in the Dailydialog dataset" ], "file": [ "4-Figure1-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png" ] }
[ "How much better peformance is achieved in human evaluation when model is trained considering proposed metric?" ]
[ [ "2002.05058-Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation-1", "2002.05058-6-Table1-1.png", "2002.05058-Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation-1", "2002.05058-6-Table2-1.png" ] ]
[ "Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553" ]
475
2002.06675
Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language
Ainu is an unwritten language that has been spoken by Ainu people who are one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO and archiving and documentation of its language heritage is of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has been produced and accumulated to save their culture, only a quite limited parts of them are transcribed so far. Thus, we started a project of automatic speech recognition (ASR) for the Ainu language in order to contribute to the development of annotated language archives. In this paper, we report speech corpus development and the structure and performance of end-to-end ASR for Ainu. We investigated four modeling units (phone, syllable, word piece, and word) and found that the syllable-based model performed best in terms of both word and phone recognition accuracy, which were about 60% and over 85% respectively in speaker-open condition. Furthermore, word and phone accuracy of 80% and 90% has been achieved in a speaker-closed setting. We also found out that a multilingual ASR training with additional speech corpora of English and Japanese further improves the speaker-open test accuracy.
{ "paragraphs": [ [ "Automatic speech recognition (ASR) technology has been made a dramatic progress and is currently brought to a pratical levels of performance assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.", "The Ainu are an indigenous people of northern Japan and Sakhakin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR and this article is the first report of this project.", "We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages." ], [ "This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language." ], [ "The Ainu people had total population of about 20,000 in the mid-19th century BIBREF7 and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after late 19th century. At present, there are only less than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders which resulted in the collection of speech data with the total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not transcribed and fully studied yet." ], [ "The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. 
Among its features such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, the word atuykorkamuy (meaning “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”).", "Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in a reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters corresponds to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix, and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, the consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below." ], [ "The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm, while uwepeker is not. In this study we focus on the prose tales as the first step." ], [ "There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency treebank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was tried by ainutrans with 2.5 hours of Ainu folklore data, even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not yet an accuracy level suitable for practical use.", "It appears that there has not yet been a substantial Ainu speech recognition study that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum." ], [ "In this section we explain the content of the data sets and how we modified it for our ASR corpus." ], [ "The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from the Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the time of recording. A sample text and its English translation are shown in Table 2." ], [ "For efficient training of the ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed as seen in the example below.", "Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information.", "To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10, which are stretches of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1."
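As a quick illustration of the transcript modifications just described, the following is a minimal sketch; the function name, the choice to replace removed symbols with spaces, and the sample line are assumptions made for this sketch rather than the project's actual preprocessing code.

```python
import re

def normalize_transcript(line: str) -> str:
    """Drop the symbols { _ , __ , ' } from a transcript line while keeping `='."""
    line = line.replace("__", " ").replace("_", " ").replace("'", "")
    # Whether a removed symbol should become a space or be deleted outright is a
    # guess in this sketch; collapse any doubled whitespace either way.
    return re.sub(r"\s+", " ", line).strip()

print(normalize_transcript("a=saha _itak"))  # hypothetical line -> "a=saha itak"
```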
 ], [ "In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units we investigate, i.e., phone, syllable, word piece, and word. We also discuss the multilingual training that we adopt for tackling the low-resource problem." ], [ "End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model.", "CTC augments the output symbol set with the “blank” symbol `$\\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeating symbols and then removing all blank symbols as in the following example:", "The probability of an output sequence $\\mathbf {L}$ for an input acoustic feature sequence $\\mathbf {X}$, where $|\\mathbf {L}| < |\\mathbf {X}|$, is defined as follows: $P(\\mathbf {L}|\\mathbf {X}) = \\sum _{\\pi \\in \\mathcal {B}^{-1}(\\mathbf {L})} P(\\pi |\\mathbf {X}). \\qquad (1)$", "$\\mathcal {B}$ is a function to contract the outputs of RNNs, so $\\mathcal {B}^{-1}(\\mathbf {L})$ means the set of symbol sequences which are reduced to $\\mathbf {L}$. The model is trained to maximize (1).", "The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”. In the naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\\mathrm {T}$ and the $l$-th attention weights $\\alpha _{1,l}, ... , \\alpha _{\\mathrm {T},l}$, as shown in (2): $\\mathbf {c}_l = \\sum _{t=1}^{\\mathrm {T}} \\alpha _{t,l} h_t. \\qquad (2)$ Here, $\\mathrm {T}$ is the length of the encoder output.", "The attention weights $\\alpha _{1,l}, ... , \\alpha _{\\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters to generate these weights are determined in end-to-end training.", "In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously, as shown in Figure 1 BIBREF11. Long Short-Term Memory (LSTM) BIBREF12 is used for the RNNs in the encoder and the decoder."
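The CTC contraction function $\\mathcal {B}$ described above can be illustrated with a short sketch; the blank marker string and the example frame sequence are assumptions for illustration only.

```python
BLANK = "<phi>"  # stand-in for the CTC blank symbol, assumed for this sketch

def ctc_collapse(frame_outputs):
    """Contract frame-wise symbols: first merge repeated symbols, then drop blanks."""
    merged = []
    for sym in frame_outputs:
        if not merged or sym != merged[-1]:
            merged.append(sym)
    return [s for s in merged if s != BLANK]

# e.g. frame-wise outputs for the word `atuy':
print(ctc_collapse(["a", "a", BLANK, "t", BLANK, "u", "u", "y"]))  # ['a', 't', 'u', 'y']
```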
 ], [ "In the conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature frame, which are then converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it has a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture long context. Which unit to adopt depends on the size of the available corpora. In addition to these two, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely.", "In this paper, we investigate the modeling unit for end-to-end Ainu speech recognition, since the optimal unit for a corpus of this size is not obvious BIBREF17. It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3 and the details of each unit are described below." ], [ "As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\\langle $wb$\\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3." ], [ "A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure.", "A word with a single letter is unchanged.", "Two consecutive Cs and Vs are given a syllable boundary between them.", "", "R$^*${CC, VV}R$^*$$\\rightarrow $ R$^*${C-C, V-V}R$^*$", "(R $\\in $ {C, V})", "Put a syllable boundary after the segment-initial V if it is followed by at least two phones.", "", "VCR$^+$$\\rightarrow $ V-CR$^+$", "Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left.", "", "(CV)$^*${CV, CVC} $\\rightarrow $ (CV-)$^*${CV, CVC}", "In addition, `=' and `$\\langle $wb$\\rangle $' are added through the model training process, as explained in Section 4.2.1.", "This procedure does not always generate a morphologically relevant syllable segmentation. For example, the word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us." ], [ "The byte pair encoding (BPE) BIBREF18 and the unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary reaches the intended size. The latter decides the segmentation so as to maximize the likelihood of occurrence of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\\langle $wb$\\rangle $' and other units are often merged to constitute a single piece, as seen in Table 3." ], [ "The original text can be segmented into words separated by spaces. To make the vocabulary smaller for ease of training, `=' is treated as a word and infrequent words are replaced with a special label `$\\langle $unk$\\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\\langle $unk$\\rangle $'."
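The rule-based syllabification described in the Syllable subsection above can be written down as a short sketch. This is an illustrative reimplementation of the stated rules rather than the authors' code; handling of `=' and the word-boundary symbol is omitted, and, as noted above, the procedure itself is not always morphologically correct.

```python
VOWELS = set("aeiou")

def _split_segment(seg):
    """Rules 2 and 3 within a segment that contains no CC or VV pair."""
    out = []
    if seg[0] in VOWELS and len(seg) >= 3:   # VCR+ -> V-CR+
        out.append(seg[0])
        seg = seg[1:]
    while len(seg) > 3:                      # peel off CV until only CV or CVC remains
        out.append(seg[:2])
        seg = seg[2:]
    out.append(seg)
    return out

def syllabify(word):
    if len(word) == 1:                       # a single-letter word is unchanged
        return [word]
    segs, cur = [], word[0]
    for prev, ch in zip(word, word[1:]):     # boundary between CC or VV pairs
        if (prev in VOWELS) == (ch in VOWELS):
            segs.append(cur)
            cur = ch
        else:
            cur += ch
    segs.append(cur)
    return [syl for seg in segs for syl in _split_segment(seg)]

print(syllabify("isermakus"))  # ['i', 'ser', 'ma', 'kus'], matching the example above
```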
 ], [ "When a sufficient amount of data is not available for the target language, ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between the Ainu and Japanese languages BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' of `strike' in English). Hence, multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers.", "In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for the other languages, and they are trained using data for all languages. Figure 2 shows the architecture for the multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained." ], [ "In this section the setting and results of the ASR experiments are described and the results are discussed." ], [ "The ASR experiments were performed in a speaker-open condition as well as a speaker-closed condition.", "In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. The total sizes of the development and test sets thus turned out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted." ], [ "The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 of three 40-dimensional log-mel filter bank features at contiguous time frames. The window length and the frame shift were set to 25 ms and 10 ms, respectively. The encoder was composed of five BiLSTM layers and the attention-based decoder had a single layer of LSTM. Each LSTM had 320 cells, and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015, with biases of zero. The fully connected layers were initialized following $\\mathcal {U}{(-0.1, 0.1)}$. Weight decay BIBREF27 with a rate of $10^{-5}$ and dropout BIBREF28 following $\\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of the 31st and 36th epochs BIBREF30. The mini-batch size was 30 and the utterances (IPUs) were sorted in ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds.", "The loss function of the model was a linear sum of the losses from CTC and the attention-based decoder, $\\mathcal {L} = \\lambda \\mathcal {L}_{\\mathrm {CTC}} + (1 - \\lambda ) \\mathcal {L}_{\\mathrm {att}},$", "where $\\lambda $ was set to be 0.5. Through all experiments, the phone labels are used to train the auxiliary CTC task because it is reported that the hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31.", "Strictly speaking, the number of units of each modeling type depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units, including special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\\langle $unk$\\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set.",
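The input features above are 120-dimensional vectors built by stacking three consecutive 40-dimensional log-mel frames. A minimal NumPy sketch is given below; the stacking stride and the handling of leftover frames are assumptions of this sketch, as they are not specified in the text.

```python
import numpy as np

def stack_frames(feats, n_stack=3, stride=3):
    """Stack n_stack consecutive 40-dim frames into one 120-dim vector.

    feats has shape (T, 40). Whether stacked windows overlap (stride=1) or the
    frame rate is also reduced (stride=n_stack) is an assumption of this sketch.
    """
    last = feats.shape[0] - n_stack + 1
    stacked = [feats[t:t + n_stack].reshape(-1) for t in range(0, last, stride)]
    return np.stack(stacked)

frames = np.random.randn(1000, 40)   # ~10 s of 40-dim log-mel features at a 10 ms shift
print(stack_frames(frames).shape)    # (333, 120) with the assumed stride of 3
```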
"For the multilingual training, we made three training scripts by concatenating the Ainu script with those of the other languages (JNAS, WSJ, and JNAS+WSJ). The model was trained on these scripts until the 30th epoch. From the 31st to the 40th epoch, the model was fine-tuned on the Ainu script only. Phone units are used for JNAS and WSJ throughout the experiments." ], [ "Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the numbers of tokens in the ground truth transcriptions of the speaker-wise evaluation sets.", "The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the setting, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size.", "The PERs of the word unit model were larger than those of the other units. This is because the word model often outputs the `$\\langle $unk$\\rangle $' symbol, while the other unit models are able to output symbols similar in sound, as below.", "In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\\langle $wb$\\rangle $' symbol.)", "WERs are generally much larger than PERs, and this gap is further aggravated for the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors as below. The WER of this example is 57% though the PER is zero.", "The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages over all evaluated speakers. Here, `+ both' represents the result of training with both the JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between the Ainu and Japanese languages." ], [ "In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which WERs in the speaker-closed and speaker-open settings were about 20% and 40%, respectively, while PERs were about 6% and 14%. Multilingual training using JNAS improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques." ], [ "The data sets used in this study are provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language." 
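For reference, the WER and PER figures reported in Section 5 are standard edit-distance error rates computed over word and phone sequences, respectively. The sketch below is generic evaluation code, not the scoring script used in the experiments, and the example strings are hypothetical; it does, however, show how a mis-split compound word inflates WER while leaving PER untouched.

```python
def error_rate(ref, hyp):
    """Levenshtein distance between token sequences divided by the reference length;
    tokens are words for WER and phones for PER."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref, hyp = "atuykorkamuy ne".split(), "atuy korkamuy ne".split()
print(error_rate(ref, hyp))                                # WER = 1.0 (2 edits / 2 words)
print(error_rate(list("".join(ref)), list("".join(hyp))))  # PER = 0.0 (same phone string)
```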
] ], "section_name": [ "Introduction", "Overview of the Ainu Language", "Overview of the Ainu Language ::: Background", "Overview of the Ainu Language ::: The Ainu Language and its Writing System", "Overview of the Ainu Language ::: Types of Ainu Recordings", "Overview of the Ainu Language ::: Previous Work", "Ainu Speech Corpus", "Ainu Speech Corpus ::: Numbers of Speakers and Episodes", "Ainu Speech Corpus ::: Data Annotation", "End-to-end Speech Recognition", "End-to-end Speech Recognition ::: End-to-end Modeling", "End-to-end Speech Recognition ::: Modeling Units", "End-to-end Speech Recognition ::: Modeling Units ::: Phone", "End-to-end Speech Recognition ::: Modeling Units ::: Syllable", "End-to-end Speech Recognition ::: Modeling Units ::: Word Piece", "End-to-end Speech Recognition ::: Modeling Units ::: Word", "End-to-end Speech Recognition ::: Multilingual Training", "Experimental Evaluation", "Experimental Evaluation ::: Data Setup", "Experimental Evaluation ::: Experimental Setting", "Experimental Evaluation ::: Results", "Summary", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "aebcd5a6e7d1e859e2ba71199ef736b4aca05345" ], "answer": [ { "evidence": [ "The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language." ], "extractive_spans": [ "relative WER improvement of 10%." ], "free_form_answer": "", "highlighted_evidence": [ "The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "a5bf56882bace8dc2767b2fce9f7dc27739c28e4" ], "answer": [ { "evidence": [ "The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.", "FLOAT SELECTED: Table 1: Speaker-wise details of the corpus" ], "extractive_spans": [], "free_form_answer": "Transcribed data is available for duration of 38h 54m 38s for 8 speakers.", "highlighted_evidence": [ "The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker.", "FLOAT SELECTED: Table 1: Speaker-wise details of the corpus" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "397177c2540efde183fc1c28d621e466d75e7bd2" ], "answer": [ { "evidence": [ "In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. Thereafter, the total sizes of the development and test sets turns out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes respectively. The ASR model is trained with the rest data. In the speaker-open condition, all the data except for the test speaker's were used for training As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted." ], "extractive_spans": [ "In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets.", "In the speaker-open condition, all the data except for the test speaker's were used for training" ], "free_form_answer": "", "highlighted_evidence": [ "In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. 
Thereafter, the total sizes of the development and test sets turns out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes respectively. The ASR model is trained with the rest data. In the speaker-open condition, all the data except for the test speaker's were used for training As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no" ], "question": [ "How big are improvements with multilingual ASR training vs single language training?", "How much transcribed data is available for for Ainu language?", "What is the difference between speaker-open and speaker-closed setting?" ], "question_id": [ "526ae24fa861d52536b66bcc2d2ddfce483511d6", "8a5254ca726a2914214a4c0b6b42811a007ecfc6", "3c0d66f9e55a89d13187da7b7128666df9a742ce" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Speaker-wise details of the corpus", "Table 2: Text excerpted from the prose tale ‘The Boy Who Became Porosir God’ spoken by KM.", "Figure 1: The attention model with CTC auxiliary task.", "Table 3: Examples of four modeling units.", "Figure 2: The architecture of the multilingual learning with two corpora. ‘FC’ and ‘CE’ means ‘fully connected’ and ‘cross-entropy’ respectively.", "Table 4: ASR performance for each speaker and modeling unit. The lowest error rates for each unit are highlighted.", "Table 5: Results of multilingual training." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Figure1-1.png", "4-Table3-1.png", "4-Figure2-1.png", "5-Table4-1.png", "6-Table5-1.png" ] }
[ "How much transcribed data is available for for Ainu language?" ]
[ [ "2002.06675-2-Table1-1.png", "2002.06675-Ainu Speech Corpus ::: Numbers of Speakers and Episodes-0" ] ]
[ "Transcribed data is available for duration of 38h 54m 38s for 8 speakers." ]
478
1909.08041
Revealing the Importance of Semantic Retrieval for Machine Reading at Scale
Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separated progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity, for development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both paragraph-level and sentence-level, and illustrate that intermediate semantic retrieval modules are vital for not only effectively filtering upstream information and thus saving downstream computation, but also for shaping upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL
{ "paragraphs": [ [ "Extracting external textual knowledge for machine comprehensive systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely restored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed as Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.", "Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.", "Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves the start-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems).", "We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification." ], [ "Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. 
In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.", "Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.", "Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer the retrieval in MRS as Semantic Retrieval since it emphasizes on semantic understanding." ], [ "In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.", "To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \\mathbf {K})$ to an output tuple $(\\hat{y}, \\mathbf {S})$ where $q$ indicates the input query, $\\mathbf {K}$ is the textual KB, $\\hat{y}$ is the output prediction, and $\\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\\mathbf {K}$ is Wikipedia.", "The system procedure is listed below:", "(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\\mathbf {P_I}$ that can cover the information as much as possible ($\\mathbf {P_I} \\subset \\mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing.", "(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. 
The scores will be used to sort all the upstream paragraphs. Then, $\\mathbf {P_I}$ will be narrowed to a new set $\\mathbf {P_N}$ ($\\mathbf {P_N} \\subset \\mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.", "(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence-level by decomposing all the paragraphs in $\\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) and obtain a set of sentences $\\mathbf {S} \\subset \\mathbf {P_N}$ for the downstream task by choosing top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\\mathbf {E}$.", "(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\\mathbf {S}$ and the query, obtaining the final output $\\hat{y}$.", "In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6." ], [ "Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.", "Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:", "We applied an affine layer and sigmoid activation on the last layer output of the [$\\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:", "where $\\hat{p}_i$ is the output of the model, $\\mathbf {T}^{p/s}_{pos}$ is the positive set and $\\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples.", "QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\\mathit {yes}$\" and “$\\mathit {no}$\" tokens between [$\\mathit {CLS}$] and the $Query$ as:", "where the supervision was given to the second or the third token when the answer is “yes\" or “no\", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as:", "where $\\hat{y}^s_i$ and $\\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. 
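Before the training-data details below, the top-$k_p$/$h_p$ and top-$k_s$/$h_s$ selection used in steps (2) and (3) of the pipeline can be sketched as follows; the function and the toy scores are assumptions for illustration, not the released implementation.

```python
def select_top(scored_items, k, h):
    """Keep at most k items whose relatedness score exceeds the threshold h."""
    kept = sorted((pair for pair in scored_items if pair[1] > h),
                  key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in kept[:k]]

# Toy example standing in for paragraph-level filtering (P_I -> P_N); the same
# function would be reused at the sentence level (P_N -> S) with k_s and h_s.
scored_paragraphs = [("para_A", 0.91), ("para_B", 0.04), ("para_C", 0.35)]
print(select_top(scored_paragraphs, k=2, h=0.1))  # ['para_A', 'para_C']
```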
It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from the upstream retrieved set as the context for training the QA module, so that it adapts to the upstream data distribution during inference.", "Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\\mathcal {J}_{ver} = -\\sum _{i} \\mathbf {y}_i \\cdot \\log (\\hat{\\mathbf {y}}_i)$, where $\\hat{\\mathbf {y}}_i \\in \\mathbf {R^3}$ denotes the model's output for the three verification labels, and $\\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from the upstream retrieved set as the new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from the upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).", "It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we train a sub-module in isolation without considering the properties of its preceding upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than in a typical supervised learning setting, where training and test data tend to be fixed and modules are isolated. We release our code and the organized data both for reproducibility and for providing an off-the-shelf testbed to facilitate future research on MRS." ], [ "MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification." ], [ "HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented with our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test splits are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. 
Those intermediate annotations enable evaluation of models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis of the explainable predictions and their relation to the upstream retrieval.", "FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate automatic fact checking. The work also proposes a benchmark task in which, given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test splits are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations provide an accurate evaluation of the results of semantic retrieval and thus suit well the analysis of the effects of the retrieval module on downstream verification.", "As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks." ], [ "Following Thorne18Fever, yang2018hotpotqa, we used the annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \\cdot P_s; R_j = R_a \\cdot R_s; F_j = \\frac{2P_j \\cdot R_j}{P_j + R_j}; \\text{EM}_j = \\text{EM}_a \\cdot \\text{EM}_s$, where $P$, $R$, and $\\text{EM}$ denote precision, recall and EM; the subscripts $a$ and $s$ indicate that the scores are for the answer span and the supporting facts.", "For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the FEVER Score for joint performance. The FEVER Score awards one point for an example with a correctly predicted label only if all ground truth facts are contained in the predicted fact set of at most 5 elements. We also used the Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upper bound of the final FEVER Score at one intermediate layer, assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set."
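The joint HotpotQA metrics defined above can be made concrete with a small sketch; this is illustrative only, and the official evaluation script should be used for reported numbers.

```python
def joint_scores(p_a, r_a, em_a, p_s, r_s, em_s):
    """Combine answer-span (subscript a) and supporting-fact (subscript s) scores
    for one example, following P_j = P_a*P_s, R_j = R_a*R_s,
    F_j = 2*P_j*R_j/(P_j+R_j), EM_j = EM_a*EM_s."""
    p_j, r_j = p_a * p_s, r_a * r_s
    f_j = 2 * p_j * r_j / (p_j + r_j) if (p_j + r_j) > 0 else 0.0
    return p_j, r_j, f_j, em_a * em_s

# A perfect answer span but only partially correct supporting facts:
print(joint_scores(1.0, 1.0, 1, 0.5, 1.0, 0))  # (0.5, 1.0, 0.666..., 0)
```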
 ], [ "We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA.", "As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new state-of-the-art results on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over previous best results. The scores for answer predictions are also higher than all previous best results, with a $\\sim $8 absolute point increase on EM and $\\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation.", "Similarly for FEVER, we show the F1 for evidence, the Label Accuracy, and the FEVER Score (same as the benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results, with $\\sim $4 and $\\sim $3 point absolute improvements on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second system, demonstrating its ability at semantic retrieval.", "Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We start by examining the necessity of both paragraph-level and sentence-level retrieval and give insights into why both of them matter." ], [ "Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the intermediate retrieval distribution, i.e., the distribution of set $\\mathbf {P_N}$ and set $\\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference data and training data for downstream modules." ], [ "To reveal the importance of the neural retrieval modules at both the paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground truth set and the set retrieved directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA." ], [ "Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both the paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. 
More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks, in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream module induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper bound of the final score risks jeopardizing the performance of the overall system.", "Next, the removal of the sentence-level retrieval module induces a $\\sim $2 point drop in EM and F1 score on the QA task, and a $\\sim $15 point drop in FEVER Score on the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11 point drop in answer EM compared to a $\\sim $9 point drop in Label Accuracy on the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluated the F1 score on FEVER for each classification label and observed a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label." ], [ "To further study the effects of upstream semantic retrieval on downstream tasks, we change the training or inference data between intermediate layers and then examine how this modification affects the downstream performance." ], [ "We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to ask each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to the upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to feed downstream modules too much information in units of paragraphs." ], [ "Similarly, to study the effects of the neural sentence-level retrieval module on the downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. 
Then, we re-trained the downstream QA and verification modules with different $h_s$ values and experimented on both HotpotQA and FEVER.", "Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks at $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to withstand a certain amount of noise at the sentence level and benefit from a higher recall.", "Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observed that the general trend is similar to that of the QA task, where both the Label Accuracy and FEVER Score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.", "We further sampled 200 examples from HotpotQA and manually tagged them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs best on Yes/No questions, as shown in Table TABREF23, reaching an accuracy of 70.6%." ], [ "Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module.", "The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) cascading effects on the downstream task might be caused by modifications at the paragraph-level retrieval." ], [ "We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both the paragraph and sentence levels in the MRS system. This work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting." ], [ "We thank the reviewers for their helpful comments and Yicheng Wang for his useful comments. This work was supported by awards from Verisk, Google, Facebook, Salesforce, and Adobe (plus Amazon and Google GPU cloud credits). The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency." 
], [ "The hyper-parameters were chosen based on the performance of the system on the dev set. The hyper-parameters search space is shown in Table TABREF27 and the learning rate was set to $10^{-5}$ in all experiments." ], [ "We used the same key-word matching method in nie2019combining to get a candidate set for each query. We also used TF-IDF BIBREF20 method to get top-5 related documents for each query. Then, the two sets were combined to get final term-based retrieval set for FEVER. The mean and standard deviation of the number of the retrieved paragraph in the merged set were 8.06 and 4.88." ], [ "We first used the same procedure on FEVER to get an initial candidate set for each query in HotpotQA. Because HotpotQA requires at least 2-hop reasoning for each query, we then extract all the hyperlinked documents from the retrieved documents in the initial candidate set, rank them with TF-IDF BIBREF20 score and then select top-5 most related documents and add them to the candidate set. This gives the final term-based retrieval set for HotpotQA. The mean and standard deviation of the number of the retrieved paragraph for each query in HotpotQA were 39.43 and 16.05." ], [ "The results of sentence-level retrieval and downstream QA with different values of $h_s$ on HotpotQA are in Table TABREF28.", "The results of sentence-level retrieval and downstream verification with different values of $h_s$ on FEVER are in Table TABREF34.", "The results of sentence-level retrieval and downstream QA with different values of $k_p$ on HotpotQA are in Table TABREF35." ], [ "We further provide examples, case study and error analysis for the full pipeline system. The examples are shown from Tables TABREF37, TABREF38, TABREF39, TABREF40, TABREF41. The examples show high diversity on the semantic level and the error occurs often due to the system's failure of extracting precise (either wrong, surplus or insufficient) information from KB." ] ], "section_name": [ "Introduction", "Related Work", "Method", "Method ::: Modeling and Training", "Experimental Setup", "Experimental Setup ::: Tasks and Datasets", "Experimental Setup ::: Metrics", "Results on Benchmarks", "Analysis and Ablations", "Analysis and Ablations ::: Ablation Studies ::: Setups:", "Analysis and Ablations ::: Ablation Studies ::: Results:", "Analysis and Ablations ::: Sub-Module Change Analysis", "Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Paragraph-level Retrieval", "Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Sentence-level Retrieval", "Analysis and Ablations ::: Answer Breakdown", "Analysis and Ablations ::: Examples", "Conclusion", "Acknowledgments", "Training Details", "Term-Based Retrieval Details ::: FEVER", "Term-Based Retrieval Details ::: HotpotQA", "Detailed Results", "Examples and Case Study" ] }
{ "answers": [ { "annotation_id": [ "82e0f7de747f8e064b5c5ad48c31e5cb6c75942e" ], "answer": [ { "evidence": [ "We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .", "As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting fact which in turn leads to doubling of the joint EM on previous best results. The scores for answer predictions are also higher than all previous best results with $\\sim $8 absolute points increase on EM and $\\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation.", "Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\\sim $4 and $\\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater that of the second system, demonstrating its ability on semantic retrieval.", "FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA.", "FLOAT SELECTED: Table 2: Performance of systems on FEVER. “F1” indicates the sentence-level evidence F1 score. “LA” indicates Label Acc. without considering the evidence prediction. “FS”=FEVER Score (Thorne et al., 2018)" ], "extractive_spans": [], "free_form_answer": "HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie", "highlighted_evidence": [ "We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .", "As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. ", "Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9.", "FLOAT SELECTED: Table 1: Results of systems on HOTPOTQA.", "FLOAT SELECTED: Table 2: Performance of systems on FEVER. “F1” indicates the sentence-level evidence F1 score. “LA” indicates Label Acc. without considering the evidence prediction. “FS”=FEVER Score (Thorne et al., 2018)" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "69e99741f00ad1c5aebaf94f57c4fc6177603818" ], "answer": [ { "evidence": [ "Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:", "We applied an affine layer and sigmoid activation on the last layer output of the [$\\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:", "where $\\hat{p}_i$ is the output of the model, $\\mathbf {T}^{p/s}_{pos}$ is the positive set and $\\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. 
Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples." ], "extractive_spans": [ "We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss." ], "free_form_answer": "", "highlighted_evidence": [ "Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:\n\nWe applied an affine layer and sigmoid activation on the last layer output of the [$\\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:\n\nwhere $\\hat{p}_i$ is the output of the model, $\\mathbf {T}^{p/s}_{pos}$ is the positive set and $\\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "398da9bdc79153d546eee6e4ae6fe93746d18a0e" ], "answer": [ { "evidence": [ "Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.", "Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:" ], "extractive_spans": [ "BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling" ], "free_form_answer": "", "highlighted_evidence": [ "Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.\n\nSemantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "b672b6cdf0a9d37299c898c6051439ffdaef0b51" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on FEVER. “LA”=Label Accuracy; “FS”=FEVER Score; “Orcl.” is the oracle upperbound of FEVER Score assuming all downstream modules are perfect. “L-F1 (S/R/N)” means the classification f1 scores on the three verification labels: SUPPORT, REFUTE, and NOT ENOUGH INFO.", "FLOAT SELECTED: Table 3: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on HOTPOTQA.", "Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. 
To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system.", "Next, the removal of sentence-level retrieval module induces a $\\sim $2 point drop on EM and F1 score in the QA task, and a $\\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and we observe a significant drop of F1 on Not Enough Info category without retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on Not Enough Info label." ], "extractive_spans": [ "This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval." ], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on FEVER. “LA”=Label Accuracy; “FS”=FEVER Score; “Orcl.” is the oracle upperbound of FEVER Score assuming all downstream modules are perfect. “L-F1 (S/R/N)” means the classification f1 scores on the three verification labels: SUPPORT, REFUTE, and NOT ENOUGH INFO.", "FLOAT SELECTED: Table 3: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on HOTPOTQA.", "Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. 
This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system.\n\nNext, the removal of sentence-level retrieval module induces a $\\sim $2 point drop on EM and F1 score in the QA task, and a $\\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "What baseline approaches do they compare against?", "How do they train the retrieval modules?", "How do they model the neural retrieval modules?", "Retrieval at what level performs better, sentence level or paragraph level?" ], "question_id": [ "13d92cbc2c77134626e26166c64ca5c00aec0bf5", "9df4a7bd0abb99ae81f0ebb29c488f1caa0f268f", "b7291845ccf08313e09195befd3c8030f28f6a9e", "ac54a9c30c968e5225978a37032158a6ffd4ddb8" ], "question_writer": [ "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668" ], "search_query": [ "retrieval reading comprehension", "retrieval reading comprehension", "retrieval reading comprehension", "retrieval reading comprehension" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
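The binary-classification formulation of semantic retrieval quoted in the annotations above (query and context fed jointly into BERT, an affine layer and sigmoid over the [CLS] output, binary cross-entropy with positives from the ground truth and negatives from the upstream module) can be sketched as follows. The Hugging Face checkpoint name, the batch construction, and the optimizer handling are assumptions added for illustration; only the overall objective follows the description quoted above.

```python
# Minimal sketch of the paragraph-/sentence-level semantic retrieval scorer:
# an affine layer maps the [CLS] representation to a relevance logit, trained
# with binary cross-entropy. Checkpoint name and training details are assumed.

import torch
from torch import nn
from transformers import BertModel, BertTokenizerFast


class SemanticRetrievalScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.affine = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]      # [CLS] token, last layer
        return self.affine(cls).squeeze(-1)    # relevance logit


def training_step(model, tokenizer, queries, contexts, labels, optimizer):
    """One BCE update; labels are 1.0 for positives, 0.0 for negatives."""
    batch = tokenizer(queries, contexts, padding=True, truncation=True,
                      return_tensors="pt")
    logits = model(**batch)
    loss = nn.BCEWithLogitsLoss()(logits,
                                  torch.tensor(labels, dtype=torch.float))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```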
{ "caption": [ "Figure 1: System Overview: blue dotted arrows indicate the inference flow and the red solid arrows indicate the training flow. Grey rounded rectangles are neural modules with different functionality. The two retrieval modules were trained with all positive examples from annotated ground truth set and negative examples sampled from the direct upstream modules. Thus, the distribution of negative examples is subjective to the quality of the upstream module.", "Table 1: Results of systems on HOTPOTQA.", "Table 2: Performance of systems on FEVER. “F1” indicates the sentence-level evidence F1 score. “LA” indicates Label Acc. without considering the evidence prediction. “FS”=FEVER Score (Thorne et al., 2018)", "Table 3: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on HOTPOTQA.", "Table 4: Ablation over the paragraph-level and sentence-level neural retrieval sub-modules on FEVER. “LA”=Label Accuracy; “FS”=FEVER Score; “Orcl.” is the oracle upperbound of FEVER Score assuming all downstream modules are perfect. “L-F1 (S/R/N)” means the classification f1 scores on the three verification labels: SUPPORT, REFUTE, and NOT ENOUGH INFO.", "Figure 2: The results of EM for supporting fact, answer prediction and joint score, and the results of supporting fact precision and recall with different values of kp at paragraph-level retrieval on HOTPOTQA.", "Table 5: System performance on different answer types. “PN”= Proper Noun", "Figure 3: The results of EM for supporting fact, answer prediction and joint score, and the results of supporting fact precision and recall with different values of hs at sentence-level retrieval on HOTPOTQA.", "Figure 4: The results of Label Accuracy, FEVER Score, and Evidence F1 with different values of hs at sentence-level retrieval on FEVER.", "Figure 5: Proportion of answer types.", "Table 6: Hyper-parameter selection for the full pipeline system. h and k are the retrieval filtering hyperparameters mentioned in the main paper. P-level and S-level indicate paragraph-level and sentence-level respectively. “{}” means values enumerated from a set. “[]” means values enumerated from a range with interval=0.1 “BS.”=Batch Size “# E.”=Number of Epochs", "Table 7: Detailed Results of downstream sentence-level retrieval and question answering with different values of hs on HOTPOTQA.", "Table 8: Results with different hs on FEVER.", "Table 9: Detailed Results of downstream sentence-level retrieval and question answering with different values of kp on HOTPOTQA.", "Table 10: HotpotQA correct prediction with sufficient evidence.", "Table 11: HotpotQA incorrect prediction with insufficient/wrong evidence.", "Table 12: HotpotQA incorrect prediction caused by extra incorrect information.", "Table 15: FEVER incorrect prediction due to extra wrong evidence" ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "5-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "7-Figure2-1.png", "8-Table5-1.png", "8-Figure3-1.png", "8-Figure4-1.png", "8-Figure5-1.png", "10-Table6-1.png", "11-Table7-1.png", "12-Table8-1.png", "12-Table9-1.png", "12-Table10-1.png", "13-Table11-1.png", "13-Table12-1.png", "14-Table15-1.png" ] }
[ "What baseline approaches do they compare against?" ]
[ [ "1909.08041-Results on Benchmarks-2", "1909.08041-5-Table1-1.png", "1909.08041-Results on Benchmarks-1", "1909.08041-Results on Benchmarks-0", "1909.08041-5-Table2-1.png" ] ]
[ "HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie" ]
479
1907.00854
Katecheo: A Portable and Modular System for Multi-Topic Question Answering
We introduce a modular system that can be deployed on any Kubernetes cluster for question answering via REST API. This system, called Katecheo, includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify any of the model serving code. All components of the system are open source and available under a permissive Apache 2 License.
{ "paragraphs": [ [ "When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering.", "Developers could support question answering using publicly available chatbot platforms, such as Watson Assistant or DialogFlow. To do this, a user would need to program an intent for each anticipated question with various examples of the question and one or more curated responses. This approach has the advantage of generating high quality answers, but it is limited to those questions anticipated by developers. Moreover, the management burden of such a system might be prohibitive as the number of questions that needs to be supported is likely to increase over time.", "To overcome the burden of programming intents, developers might look towards more advanced question answering systems that are built using open domain question and answer data (e.g., from Stack Exchange or Wikipedia), reading comprehension models, and knowledge base searches. In particular, BIBREF1 previously demonstrated a two step system, called DrQA, that matches an input question to a relevant article from a knowledge base and then uses a recurrent neural network (RNN) based comprehension model to detect an answer within the matched article. This more flexible method was shown to produce promising results for questions related to Wikipedia articles and it performed competitively on the SQuAD benchmark BIBREF2 .", "However, if developers wanted to integrate this sort of reading comprehension based methodology into their applications, how would they currently go about this? They would need to wrap pre-trained models in their own custom code and compile similar knowledge base articles at the very least. At the most, they may need to re-train reading comprehension models on open domain question and answer data (e.g., SQuAD) and/or implement their own knowledge base search algorithms.", "In this paper we present Katecheo, a portable and modular system for reading comprehension based question answering that attempts to ease this development burden. The system provides a quickly deployable and easily extendable way for developers to integrate question answering functionality into their applications. Katecheo includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. The modules are tied together in a single inference graph that can be invoked via a REST API call. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify the model serving code. All components of the system are open source and publicly available under a permissive Apache 2 License.", "The rest of the paper is organized as follows. In the next section, we provide an overview of the system logic and its modules. In Section 3, we outline the architecture and configuration of Katecheo, including extending the system to an arbitrary number of topics. In Section 4, we report some results using example pre-trained models and public knowledge base articles. Then in conclusion, we summarize the system, its applicability, and future development work." 
], [ "Katecheo is partially inspired by the work of BIBREF1 on DrQA. That previously developed method has two primary phases of question answering: document retrieval and reading comprehension. Together these functionalities enable open domain question answering. However, many dialog systems are not completely open domain. For example, developers might want to create a chatbot that has targeted conversations about restaurant reservations and movie times. It would be advantageous for such a chatbot to answer questions about food and entertainment, but the developers might not want to allow the conversation to stray into other topics.", "With Katecheo, one of our goals was to create a question answering system that is more flexible than those relying on curated responses while remaining more targeted than a completely open domain question answering system. The system includes document retrieval (or what we refer to as “knowledge base search”) and reading comprehension, but only within sets of curated knowledge base articles each corresponding to a particular topic (e.g., food or entertainment).", "When a question text is input into the Katecheo system, it is processed through four modules: (1) question identification, (2) topic classification, (3) knowledge base search, and (4) reading comprehension. This overall logic is depicted in Figure FIGREF6 ." ], [ "The first module in Katecheo, question identification, determines if the input text (labeled Q in Figure FIGREF6 ) is actually a question. In our experience, users of dialog systems provide a huge number of unexpected inputs. Some of these unexpected inputs are questions and some are just statements. Before going to the trouble of matching a knowledge base article and generating an answer, Katecheo completes this initial step to ensure that the input is a question. If the input is a question, the question identification module (henceforth the “question identifier\") passes a positive indication/flag to the next module indicating that it should continue processing the question. Otherwise, it passes a negative flag to end the processing.", "The question identifier uses a rule-based approach to question identification. As suggested in BIBREF3 , we utilize the presence of question marks and 5W1H words to determine if the input is a question. Based on our testing, this provides quite high performance (90%+ accuracy) and is not a blocker to overall performance." ], [ "To reach our goal of a question answering system that would be more targeted than previous open domain question answering, we decided to allow the user of the system to define one or more topics. The topic classification module of the system (henceforth the “topic classifier\") will attempt to classify the input question into one of the topics and then select a knowledge base article from a set of knowledge base articles corresponding to that topic.", "One way we could enable this topic classification is by training a text classifier that would classify the input text into one of the user supplied topics. However, this approach would require (i) the user to provide both the topic and many example questions within that topic, and (ii) the system to retrain its classification model any time a new topic was added. 
We wanted to prioritize the ease of deployment, modularity and extensibility of the system, and, thus, we decided to take a slightly more naive approach.", "Along with each topic, the user supplies the system with a pre-trained Named Entity Recognition (NER) model that identifies entities within that topic. The topic classifier then utilizes these pre-trained models to determine if the input question includes entities from one of the user supplied topics. If so, the topic classifier classifies the question into that topic. When two of the topics conflict, the system currently suspends processing and returns a null answer.", "The system accepts NER models that are compatible with spaCy BIBREF4 . As discussed further below, the user can supply a link to a zip file that contains each topic NER model.", "Note, it might be possible to remove the dependence on NER models in the future. We are currently exploring the use of other topic modeling techniques including non-negative matrix factorization and/or Latent Dirichlet Allocation (LDA). These techniques could enable the system to automatically match the input question to most appropriate topical knowledge base, and thus only rely on the user to supply knowledge base articles." ], [ "Once the topic has been identified, a search is made to match the question with an appropriate knowledge base article from a set of user supplied knowledge base articles corresponding to the user supplied topic. This matched article will be utilized in the next stage of processing to generate an answer.", "The user supplied sets of knowledge base articles for each topic are in a JSON format and include a title and body text for each article. The system assumes that the knowledge base articles are in the form of a question and answer knowledge base (e.g., like a Stack Exchange site), rather than any arbitrarily structured articles. In this way, we are able to utilize the titles of the articles (i.e., the questions) in matching to user input questions.", "In the knowledge base search module of Katecheo (henceforth the “KB Search\" module), we use the Python package FuzzyWuzzy to perform string matching between the input question and the knowledge base article titles. FuzzyWuzzy uses Levenshtein Distance BIBREF5 match the input string to one or more input candidate strings.", "We eventually plan to update this knowledge base search to an approach similar to that of BIBREF1 using bigram hashing and TF-IDF. However, the fuzzy string matching approach works reasonably well as long as the supplied knowledge bases are of a type where many of the article titles are in the form of topical questions." ], [ "The final module of the Katecheo system is the reading comprehension (or just “comprehension\") module. This module takes as input the original input question plus the matched knowledge base article body text and uses a reading comprehension model to select an appropriate answer from within the article.", "The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 . This BiDAF model includes a Convolutional Neural Network (CNN) based character level embedding layer, a word embedding layer that uses pre-trained GloVE embeddings, a Long Short-Term Memory Network (LSTM) based contextual embedding layer, an “attention flow layer\", and a modeling layer include bi-directional LSTMs. 
We are using a pre-trained version of BiDAF available in the AllenNLP BIBREF7 library.", "Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models." ], [ "All four of the Katecheo modules are containerized with Docker BIBREF10 and are deployed as pods on top of Kubernetes BIBREF11 (see Figure FIGREF12 ). In this way, Katecheo is completely portable to any standard Kubernetes cluster including hosted versions in AWS, GCP, Digital Ocean, Azure, etc. and on-premises version that use vanilla Kubernetes, OpenShift, CaaS, etc.", "To provide developers with a familiar interface to the question answering system, we provide a REST API interface. Developers can call Katecheo via a single endpoint with ingress to the system provided by Ambassador, a Kubernetes-native API Gateway.", "Seldon-core is used to simplify the routing between the four modules, create the REST API, and manage deployments. To create the Seldon deployment of the four modules, as depicted in Figure FIGREF12 , we: (1) create a Python class for each module that contains standardized Seldon-specified methods and that loads the various models for making predictions; (2) wrap that Python class in a standard, containerized Seldon model server using a public Seldon Docker image and s2i ; (3) push the wrapped Python code to DockerHub ; (4) create a Seldon inference graph that links the modules in a Directed Acyclic Graph (DAG); and (5) deploy the inference graph to Kubernetes. After all of these steps are complete, a single REST API endpoint is exposed. When a user calls this single API endpoint the Seldon inference graph is invoked and the modules are executed using the specified routing logic.", "To specify the topic names, topic NER models, and topic knowledge base JSON files (as mentioned in reference to Figure FIGREF6 ), the user need only fill out a JSON configuration file template in the following format:", "[", " {", " \"name\": \"topic 1 name\",", " \"ner_model\": \"<link>\",", " \"kb_file\": \"<link>\"", " },", " {", " \"name\": \"topic 2 name\",", " \"ner_model\": \"<link>\",", " \"kb_file\": \"<link>\"", " },", " etc...", "]", "", "where each INLINEFORM0 would be replaced with a respective URL containing the NER model or knowledge base JSON file. The linked NER models need to be spaCy compatible and compressed into a single zip file, and the linked knowledge base JSON files need to include both titles and bodies as specified in the Katecheo GitHub repository README file. Once this configuration file is created, a deploy script can be executed to automatically deploy all of the Katecheo modules." ], [ "We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topics are diverse enough that they would warrant different curated sets of knowledge base articles, and we can easily retrieve knowledge base articles for each of these subjects from the Medical Sciences and Christianity Stack Exchange sites, respectively.", "We also have access to NER models for both of these topics. For the Medical Sciences NER model, we utilized the en_ner_bc5cdr_md model from scispaCy BIBREF12 , which is trained on the BC5CDR corpus BIBREF13 . 
For the Christianity topic, we utilize a custom spaCy NER model trained on annotated data from the GotQuestions website.", "Example inputs and outputs of the system are included in Table TABREF17 . As can be seen, the system is able to match many questions with an appropriate topic and subsequently generate an answer using the BiDAF comprehension model. Not all of the answers would fit into conversational question answering in terms of naturalness, but others show promise.", "There were cases in which the system was not able to classify an input question into an appropriate topic, even when there would have been a closely matching knowledge base article. In particular when testing the system on the Medical Sciences topic, we noticed a higher number of these cases (see the fourth and fifth rows of Table TABREF17 ). This is due to the fact that the pre-trained Medical Sciences NER model from scispaCy is primarily intended to recognize chemical and disease entities within text, not general medical sciences terminology. On the other hand, the NER model utilized for the Christianity topic is more generally applicable within that topic." ], [ "In conclusion, Katecheo is a portable and modular system for reading comprehension based question answering. It is portable because it is built on cloud native technologies (i.e., Docker and Kubernetes) and can be deployed to any cloud or on-premise environment. It is modular because it is composed of four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension.", "Initial usage of the system indicates that it provides a flexible and developer friendly way to enable question answering functionality for multiple topics or domains via REST API. That being said, the current configurations of Katecheo are limited to answering from knowledge bases constructed in a question and answer format, and the current topic classification relies on topical NER models that are compatible with spaCy. In the future, we plan to overcome these limitations by extending our knowledge base search methodology, enabling usage of a wider variety of pre-trained models, and exploring other topic matching/modeling techniques to remove our NER model dependency.", "The complete source code, configuration information, deployment scripts, and examples for Katecheo are available at https://github.com/cvdigitalai/katecheo. A screencast demonstration of Katecheo is available at https://youtu.be/g51t6eRX2Y8." ] ], "section_name": [ "Introduction", "System Overview", "Question Identification", "Topic Classification", "Knowledge Base Search", "Reading Comprehension", "Architecture and Configuration", "Example Usage", "Conclusions" ] }
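As an illustration of two of the Katecheo modules described above, the sketch below implements the rule-based question identifier (question mark plus 5W1H words) and the FuzzyWuzzy-based knowledge base search over article titles. The 5W1H word list, the score threshold, and the `{"title", "body"}` knowledge base format (mirroring the Stack Exchange-style JSON files) are assumptions for the example; the system's actual code lives in the linked GitHub repository.

```python
# Minimal sketch of the question identifier and KB search modules described
# above. The wh-word list, score threshold, and KB record format are assumed.

from fuzzywuzzy import process

WH_WORDS = {"who", "what", "when", "where", "why", "how"}  # 5W1H


def is_question(text):
    """Rule-based question identification used to gate the pipeline."""
    tokens = text.lower().split()
    return text.strip().endswith("?") or (len(tokens) > 0 and tokens[0] in WH_WORDS)


def kb_search(question, kb_articles, min_score=60):
    """Fuzzy-match the input question against knowledge base article titles
    and return the body of the best match (or None below the threshold)."""
    titles = [a["title"] for a in kb_articles]
    best_title, score = process.extractOne(question, titles)
    if score < min_score:
        return None
    return next(a["body"] for a in kb_articles if a["title"] == best_title)
```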
{ "answers": [ { "annotation_id": [ "7f8619d9280a918743612bc1fbcef60ffeeb55e6" ], "answer": [ { "evidence": [ "We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topics are diverse enough that they would warrant different curated sets of knowledge base articles, and we can easily retrieve knowledge base articles for each of these subjects from the Medical Sciences and Christianity Stack Exchange sites, respectively." ], "extractive_spans": [], "free_form_answer": "2", "highlighted_evidence": [ "We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "4d93055ba78109cc42b609ccac07e763855869ce" ], "answer": [ { "evidence": [ "The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 . This BiDAF model includes a Convolutional Neural Network (CNN) based character level embedding layer, a word embedding layer that uses pre-trained GloVE embeddings, a Long Short-Term Memory Network (LSTM) based contextual embedding layer, an “attention flow layer\", and a modeling layer include bi-directional LSTMs. We are using a pre-trained version of BiDAF available in the AllenNLP BIBREF7 library.", "Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models.", "Architecture and Configuration" ], "extractive_spans": [ "BiDAF", "BERT " ], "free_form_answer": "", "highlighted_evidence": [ "The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 .", "Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models.\n\nArchitecture and Configuration" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "how many domains did they experiment with?", "what pretrained models were used?" ], "question_id": [ "4d5e2a83b517e9c082421f11a68a604269642f29", "2c3b2c3bab6d18cb0895462e3cfd91cd0dee7f7d" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: The overall processing flow in Katecheo. Q represents the input question text, the dashed lines represent a flag passed between modules indicating whether the next module should proceed with processing, and the cylinders represent various data inputs to the modules.", "Figure 2: The overall Katecheo architecture. Each node in Kubernetes may be a cloud instance or on-premise machine.", "Table 1: Example inputs, outputs, and matched topics from a Katecheo system deployed to provide question answering on two topics, Medical Sciences and Christianity." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png" ] }
[ "how many domains did they experiment with?" ]
[ [ "1907.00854-Example Usage-0" ] ]
[ "2" ]
481
1811.01734
Transductive Learning with String Kernels for Cross-Domain Text Classification
For many text classification tasks, there is a major problem posed by the lack of labeled data in a target domain. Although classifiers for a target domain can be trained on labeled text data from a related source domain, the accuracy of such classifiers is usually lower in the cross-domain setting. Recently, string kernels have obtained state-of-the-art results in various text classification tasks such as native language identification or automatic essay scoring. Moreover, classifiers based on string kernels have been found to be robust to the distribution gap between different domains. In this paper, we formally describe an algorithm composed of two simple yet effective transductive learning approaches to further improve the results of string kernels in cross-domain settings. By adapting string kernels to the test set without using the ground-truth test labels, we report significantly better accuracy rates in cross-domain English polarity classification.
{ "paragraphs": [ [ "", "Domain shift is a fundamental problem in machine learning, that has attracted a lot of attention in the natural language processing and vision communities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . To understand and address this problem, generated by the lack of labeled data in a target domain, researchers have studied the behavior of machine learning methods in cross-domain settings BIBREF2 , BIBREF11 , BIBREF10 and came up with various domain adaptation techniques BIBREF12 , BIBREF5 , BIBREF6 , BIBREF9 . In cross-domain classification, a classifier is trained on data from a source domain and tested on data from a (different) target domain. The accuracy of machine learning methods is usually lower in the cross-domain setting, due to the distribution gap between different domains. However, researchers proposed several domain adaptation techniques by using the unlabeled test data to obtain better performance BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF7 . Interestingly, some recent works BIBREF10 , BIBREF17 indicate that string kernels can yield robust results in the cross-domain setting without any domain adaptation. In fact, methods based on string kernels have demonstrated impressive results in various text classification tasks ranging from native language identification BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and authorship identification BIBREF22 to dialect identification BIBREF23 , BIBREF17 , BIBREF24 , sentiment analysis BIBREF10 , BIBREF25 and automatic essay scoring BIBREF26 . As long as a labeled training set is available, string kernels can reach state-of-the-art results in various languages including English BIBREF19 , BIBREF10 , BIBREF26 , Arabic BIBREF27 , BIBREF20 , BIBREF17 , BIBREF24 , Chinese BIBREF25 and Norwegian BIBREF20 . Different from all these recent approaches, we use unlabeled data from the test set in a transductive setting in order to significantly increase the performance of string kernels. In our recent work BIBREF28 , we proposed two transductive learning approaches combined into a unified framework that improves the results of string kernels in two different tasks. In this paper, we provide a formal and detailed description of our transductive algorithm and present results in cross-domain English polarity classification.", "The paper is organized as follows. Related work on cross-domain text classification and string kernels is presented in Section SECREF2 . Section SECREF3 presents our approach to obtain domain adapted string kernels. The transductive transfer learning method is described in Section SECREF4 . The polarity classification experiments are presented in Section SECREF5 . Finally, we draw conclusions and discuss future work in Section SECREF6 .", "" ], [ "" ], [ "Transfer learning (or domain adaptation) aims at building effective classifiers for a target domain when the only available labeled training data belongs to a different (source) domain. Domain adaptation techniques can be roughly divided into graph-based methods BIBREF1 , BIBREF29 , BIBREF9 , BIBREF30 , probabilistic models BIBREF3 , BIBREF4 , knowledge-based models BIBREF14 , BIBREF31 , BIBREF11 and joint optimization frameworks BIBREF12 . 
The transfer learning methods from the literature show promising results in a variety of real-world applications, such as image classification BIBREF12 , text classification BIBREF13 , BIBREF16 , BIBREF3 , polarity classification BIBREF1 , BIBREF29 , BIBREF4 , BIBREF6 , BIBREF30 and others BIBREF32 .", "General transfer learning approaches. Long et al. BIBREF12 proposed a novel transfer learning framework to model distribution adaptation and label propagation in a unified way, based on the structural risk minimization principle and the regularization theory. Shu et al. BIBREF5 proposed a method that bridges the distribution gap between the source domain and the target domain through affinity learning, by exploiting the existence of a subset of data points in the target domain that are distributed similarly to the data points in the source domain. In BIBREF7 , deep learning is employed to jointly optimize the representation, the cross-domain transformation and the target label inference in an end-to-end fashion. More recently, Sun et al. BIBREF8 proposed an unsupervised domain adaptation method that minimizes the domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Chang et al. BIBREF9 proposed a framework based on using a parallel corpus to calibrate domain-specific kernels into a unified kernel for leveraging graph-based label propagation between domains.", "Cross-domain text classification. Joachims BIBREF13 introduced the Transductive Support Vector Machines (TSVM) framework for text classification, which takes into account a particular test set and tries to minimize the error rate for those particular test samples. Ifrim et al. BIBREF14 presented a transductive learning approach for text classification based on combining latent variable models for decomposing the topic-word space into topic-concept and concept-word spaces, and explicit knowledge models with named concepts for populating latent variables. Guo et al. BIBREF16 proposed a transductive subspace representation learning method to address domain adaptation for cross-lingual text classification. Zhuang et al. BIBREF3 presented a probabilistic model, by which both the shared and distinct concepts in different domains can be learned by the Expectation-Maximization process which optimizes the data likelihood. In BIBREF33 , an algorithm to adapt a classification model by iteratively learning domain-specific features from the unlabeled test data is described.", "Cross-domain polarity classification. In recent years, cross-domain sentiment (polarity) classification has gained popularity due to the advances in domain adaptation on one side, and to the abundance of documents from various domains available on the Web, expressing positive or negative opinion, on the other side. Some of the general domain adaptation frameworks have been applied to polarity classification BIBREF3 , BIBREF33 , BIBREF9 , but there are some approaches that have been specifically designed for the cross-domain sentiment classification task BIBREF0 , BIBREF34 , BIBREF1 , BIBREF29 , BIBREF11 , BIBREF4 , BIBREF6 , BIBREF10 , BIBREF30 . To the best of our knowledge, Blitzer et al. BIBREF0 were the first to report results on cross-domain classification proposing the structural correspondence learning (SCL) method, and its variant based on mutual information (SCL-MI). Pan et al. 
BIBREF1 proposed a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, using domain-independent words as a bridge. Bollegala et al. BIBREF31 used a cross-domain lexicon creation to generate a sentiment-sensitive thesaurus (SST) that groups different words expressing the same sentiment, using unigram and bigram features as BIBREF0 , BIBREF1 . Luo et al. BIBREF4 proposed a cross-domain sentiment classification framework based on a probabilistic model of the author's emotion state when writing. An Expectation-Maximization algorithm is then employed to solve the maximum likelihood problem and to obtain a latent emotion distribution of the author. Franco-Salvador et al. BIBREF11 combined various recent and knowledge-based approaches using a meta-learning scheme (KE-Meta). They performed cross-domain polarity classification without employing any domain adaptation technique. More recently, Fernández et al. BIBREF6 introduced the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. The approach builds term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a highly predictive term that behaves similarly across domains. A graph-based approach for sentiment classification that models the relatedness of different domains based on shared users and keywords is proposed in BIBREF30 .", "" ], [ "In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. 
Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.", "" ], [ "", "String kernels. Kernel functions BIBREF38 capture the intuitive notion of similarity between objects in a specific domain. For example, in text mining, string kernels can be used to measure the pairwise similarity between text samples, simply based on character n-grams. Various string kernel functions have been proposed to date BIBREF35 , BIBREF38 , BIBREF19 . Perhaps one of the most recently introduced string kernels is the histogram intersection string kernel BIBREF19 . For two strings over an alphabet INLINEFORM0 , INLINEFORM1 , the intersection string kernel is formally defined as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the number of occurrences of n-gram INLINEFORM1 as a substring in INLINEFORM2 , and INLINEFORM3 is the length of INLINEFORM4 . The spectrum string kernel or the presence bits string kernel can be defined in a similar fashion BIBREF19 .", "Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are samples from the set INLINEFORM2 , for all INLINEFORM3 . We then normalize the kernel matrix by dividing each component by the square root of the product of the two corresponding diagonal components: DISPLAYFORM0 ", "We transform the normalized kernel matrix into a radial basis function (RBF) kernel matrix as follows: DISPLAYFORM0 ", "Each row in the RBF kernel matrix INLINEFORM0 is now interpreted as a feature vector. In other words, each sample INLINEFORM1 is represented by a feature vector that contains the similarity between the respective sample INLINEFORM2 and all the samples in INLINEFORM3 . Since INLINEFORM4 includes the test samples as well, the feature vector is inherently adapted to the test set. Indeed, it is easy to see that the features will be different if we choose to apply the string kernel approach on a set of test samples INLINEFORM5 , such that INLINEFORM6 . It is important to note that through the features, the subsequent classifier will have some information about the test samples at training time. More specifically, the feature vector conveys information about how similar is every test sample to every training sample. We next consider the linear kernel, which is given by the scalar product between the new feature vectors. 
To obtain the final linear kernel matrix, we simply need to compute the product between the RBF kernel matrix and its transpose: DISPLAYFORM0 ", "In this way, the samples from the test set, which are included in INLINEFORM0 , are used to obtain new (transductive) string kernels that are adapted to the test set at hand.", "[!tpb] Transductive Kernel Algorithm", "Input:", " INLINEFORM0 – the training set of INLINEFORM1 training samples and associated class labels;", " INLINEFORM0 – the set of INLINEFORM1 test samples;", " INLINEFORM0 – a kernel function;", " INLINEFORM0 – the number of test samples to be added in the second round of training;", " INLINEFORM0 – a binary kernel classifier.", "Domain-Adapted Kernel Matrix Computation Steps:", " INLINEFORM0 INLINEFORM1 ; INLINEFORM2 ; INLINEFORM3 ; INLINEFORM4 ", " INLINEFORM0 INLINEFORM1 INLINEFORM2 ", " INLINEFORM0 INLINEFORM1 INLINEFORM2 ", " INLINEFORM0 ", " INLINEFORM0 ", "Transductive Kernel Classifier Steps:", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 INLINEFORM1 ", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 INLINEFORM1 the dual weights of INLINEFORM2 trained on INLINEFORM3 with the labels INLINEFORM4 ", " INLINEFORM0 ", " INLINEFORM0 ; INLINEFORM1 ", " INLINEFORM0 INLINEFORM1 ", " INLINEFORM0 ", " INLINEFORM0 INLINEFORM1 sort INLINEFORM2 in descending order and return the sorted indexes", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 ", " INLINEFORM0 ", "Output:", " INLINEFORM0 – the set of predicted labels for the test samples in INLINEFORM1 . ", "" ], [ "", "We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 .", "Notations. We use the following notations in the algorithm. Sets, arrays and matrices are written in capital letters. All collection types are considered to be indexed starting from position 1. The elements of a set INLINEFORM0 are denoted by INLINEFORM1 , the elements of an array INLINEFORM2 are alternatively denoted by INLINEFORM3 or INLINEFORM4 , and the elements of a matrix INLINEFORM5 are denoted by INLINEFORM6 or INLINEFORM7 when convenient. The sequence INLINEFORM8 is denoted by INLINEFORM9 . We use sequences to index arrays or matrices as well. For example, for an array INLINEFORM10 and two integers INLINEFORM11 and INLINEFORM12 , INLINEFORM13 denotes the sub-array INLINEFORM14 . In a similar manner, INLINEFORM15 denotes a sub-matrix of the matrix INLINEFORM16 , while INLINEFORM17 returns the INLINEFORM18 -th row of M and INLINEFORM19 returns the INLINEFORM20 -th column of M. The zero matrix of INLINEFORM21 components is denoted by INLINEFORM22 , and the square zero matrix is denoted by INLINEFORM23 . The identity matrix is denoted by INLINEFORM24 .", "Algorithm description. In steps 8-17, we compute the domain-adapted string kernel matrix, as described in the previous section. In the first learning iteration (when INLINEFORM0 ), we train several classifiers to distinguish each individual class from the rest, according to the one-versus-all (OVA) scheme. In step 27, the kernel classifier INLINEFORM1 is trained to distinguish a class from the others, assigning a dual weight to each training sample from the source domain. 
The returned column vector of dual weights is denoted by INLINEFORM2 and the bias value is denoted by INLINEFORM3 . The vector of weights INLINEFORM4 contains INLINEFORM5 values, such that the weight INLINEFORM6 corresponds to the training sample INLINEFORM7 . When the test kernel matrix INLINEFORM8 of INLINEFORM9 components is multiplied with the vector INLINEFORM10 in step 28, the result is a column vector of INLINEFORM11 positive or negative scores. Afterwards (step 34), the test samples are sorted in order to maximize the probability of correctly predicted labels. For each test sample INLINEFORM12 , we consider the score INLINEFORM13 (step 32) produced by the classifier for the chosen class INLINEFORM14 (step 31), which is selected according to the OVA scheme. The sorting is based on the hypothesis that if the classifier associates a higher score to a test sample, it means that the classifier is more confident about the predicted label for the respective test sample. Before the second learning iteration, a number of INLINEFORM15 test samples from the top of the sorted list are added to the training set (steps 35-39) for another round of training. As the classifier is more confident about the predicted labels INLINEFORM16 of the added test samples, the chance of including noisy examples (with wrong labels) is minimized. On the other hand, the classifier has the opportunity to learn some useful domain-specific patterns of the test domain. We believe that, at least in the cross-domain setting, the added test samples bring more useful information than noise. We would like to stress out that the ground-truth test labels are never used in our transductive algorithm. Although the test samples are required beforehand, their labels are not necessary. Hence, our approach is suitable in situations where unlabeled data from the target domain can be collected cheaply, and such situations appear very often in practice, considering the great amount of data available on the Web.", "" ], [ "", "Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews.", "Baselines. We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.", "Evaluation procedure and parameters. We follow the same evaluation methodology of Giménez-Pérez et al. BIBREF10 , to ensure a fair comparison. Furthermore, we use the same kernels, namely the presence bits string kernel ( INLINEFORM0 ) and the intersection string kernel ( INLINEFORM1 ), and the same range of character n-grams (5-8). To compute the string kernels, we used the open-source code provided by Ionescu et al. BIBREF19 , BIBREF40 . 
For the transductive kernel classifier, we select INLINEFORM2 unlabeled test samples to be included in the training set for the second round of training. We choose Kernel Ridge Regression BIBREF38 as classifier and set its regularization parameter to INLINEFORM3 in all our experiments. Although Giménez-Pérez et al. BIBREF10 used a different classifier, namely Kernel Discriminant Analysis, we observed that Kernel Ridge Regression produces similar results ( INLINEFORM4 ) when we employ the same string kernels. As Giménez-Pérez et al. BIBREF10 , we evaluate our approach in two cross-domain settings. In the multi-source setting, we train the models on all domains, except the one used for testing. In the single-source setting, we train the models on one of the four domains and we independently test the models on the remaining three domains.", "Results in multi-source setting. The results for the multi-source cross-domain polarity classification setting are presented in Table TABREF8 . Both the transductive presence bits string kernel ( INLINEFORM0 ) and the transductive intersection kernel ( INLINEFORM1 ) obtain better results than their original counterparts. Moreover, according to the McNemar's test BIBREF41 , the results on the DVDs, the Electronics and the Kitchen target domains are significantly better than the best baseline string kernel, with a confidence level of INLINEFORM2 . When we employ the transductive kernel classifier (TKC), we obtain even better results. On all domains, the accuracy rates yielded by the transductive classifier are more than INLINEFORM3 better than the best baseline. For example, on the Books domain the accuracy of the transductive classifier based on the presence bits kernel ( INLINEFORM4 ) is INLINEFORM5 above the best baseline ( INLINEFORM6 ) represented by the intersection string kernel. Remarkably, the improvements brought by our transductive string kernel approach are statistically significant in all domains.", "Results in single-source setting. The results for the single-source cross-domain polarity classification setting are presented in Table TABREF9 . We considered all possible combinations of source and target domains in this experiment, and we improve the results in each and every case. Without exception, the accuracy rates reached by the transductive string kernels are significantly better than the best baseline string kernel BIBREF10 , according to the McNemar's test performed at a confidence level of INLINEFORM0 . The highest improvements (above INLINEFORM1 ) are obtained when the source domain contains Books reviews and the target domain contains Kitchen reviews. As in the multi-source setting, we obtain much better results when the transductive classifier is employed for the learning task. In all cases, the accuracy rates of the transductive classifier are more than INLINEFORM2 better than the best baseline string kernel. Remarkably, in four cases (E INLINEFORM3 B, E INLINEFORM4 D, B INLINEFORM5 K and D INLINEFORM6 K) our improvements are greater than INLINEFORM7 . The improvements brought by our transductive classifier based on string kernels are statistically significant in each and every case. In comparison with SFA BIBREF1 , we obtain better results in all but one case (K INLINEFORM8 D). 
Remarkably, we surpass the other state-of-the-art approaches BIBREF8 , BIBREF39 in all cases.", "" ], [ "", "In this paper, we presented two domain adaptation approaches that can be used together to improve the results of string kernels in cross-domain settings. We provided empirical evidence indicating that our framework can be successfully applied in cross-domain text classification, particularly in cross-domain English polarity classification. Indeed, the polarity classification experiments demonstrate that our framework achieves better accuracy rates than other state-of-the-art methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 . By using the same parameters across all the experiments, we showed that our transductive transfer learning framework can bring significant improvements without having to fine-tune the parameters for each individual setting. Although the framework described in this paper can be generally applied to any kernel method, we focused our work only on string kernel approaches used in text classification. In future work, we aim to combine the proposed transductive transfer learning framework with different kinds of kernels and classifiers, and employ it for other cross-domain tasks." ] ], "section_name": [ "Introduction", "Related Work", "Cross-Domain Classification", "String Kernels", "Transductive String Kernels", "Transductive Kernel Classifier", "Polarity Classification", "Conclusion" ] }
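To make the two-iteration transductive kernel classifier described above concrete, here is a minimal sketch using Kernel Ridge Regression on precomputed string kernel matrices. The function names and the placeholder values for the number of added test samples (`r`) and the regularization strength (`alpha`) are illustrative assumptions, not the authors' released implementation; in the binary polarity case the OVA decision reduces to the sign of the score, and the score magnitude serves as the confidence used for sorting.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge


def transductive_kernel_classifier(K_train, y_train, K_test_train, K_test_test,
                                   r=1000, alpha=1e-3):
    """Two-iteration transductive kernel classifier (TKC) sketch.

    K_train:      (n, n) precomputed string kernel between training samples.
    K_test_train: (m, n) kernel between test and training samples.
    K_test_test:  (m, m) kernel between test samples (needed once some test
                  samples are appended to the training set).
    y_train:      (n,) labels in {-1, +1} for binary polarity classification.
    r and alpha are placeholders; the paper's exact values are not asserted here.
    """
    # --- first learning iteration: train on the source-domain data only ---
    clf = KernelRidge(alpha=alpha, kernel="precomputed")
    clf.fit(K_train, y_train)

    # Scores for the test samples; sign gives the pseudo-label, |score| is
    # treated as the classifier's confidence in that label.
    scores = clf.predict(K_test_train)
    pseudo_labels = np.sign(scores)
    confidence = np.abs(scores)

    # Sort test samples so the most confidently labeled ones come first, and
    # add the top r of them (with their predicted labels) to the training set.
    top = np.argsort(-confidence)[:r]
    K_aug = np.block([
        [K_train,            K_test_train[top].T],
        [K_test_train[top],  K_test_test[np.ix_(top, top)]],
    ])
    y_aug = np.concatenate([y_train, pseudo_labels[top]])

    # --- second learning iteration: retrain on the augmented training set ---
    clf2 = KernelRidge(alpha=alpha, kernel="precomputed")
    clf2.fit(K_aug, y_aug)

    K_test_aug = np.hstack([K_test_train, K_test_test[:, top]])
    return np.sign(clf2.predict(K_test_aug))
```

The structure mirrors the description: only predicted labels of the added test samples are used, so ground-truth test labels never enter training.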
{ "answers": [ { "annotation_id": [ "991f8b4557b5094f4ecb286448c4aa53500d177a" ], "answer": [ { "evidence": [ "Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews." ], "extractive_spans": [ "Books", "DVDs", "Electronics", "Kitchen appliances" ], "free_form_answer": "", "highlighted_evidence": [ "For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "49939755398f439dbd628b8d722e611d80c1add5" ], "answer": [ { "evidence": [ "Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews." ], "extractive_spans": [], "free_form_answer": "8000", "highlighted_evidence": [ "For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "39e588100ef8781ed4252eec26b966bd887addf4" ], "answer": [ { "evidence": [ "Baselines. We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.", "Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. 
For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0", "We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 ." ], "extractive_spans": [ "string kernels", "SST", "KE-Meta", "SFA", "CORAL", "TR-TrAdaBoost", "Transductive string kernels", "transductive kernel classifier" ], "free_form_answer": "", "highlighted_evidence": [ "We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.", "Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings.", "Our transductive kernel classifier (TKC) approach is composed of two learning iterations. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "a815779da610c0c779be8dc5e22f5d703e13b20e" ], "answer": [ { "evidence": [ "In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. 
Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification." ], "extractive_spans": [], "free_form_answer": "String kernel is a technique that uses character n-grams to measure the similarity of strings", "highlighted_evidence": [ "String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What domains are contained in the polarity classification dataset?", "How long is the dataset?", "What machine learning algorithms are used?", "What is a string kernel?" ], "question_id": [ "ea51aecd64bd95d42d28ab3f1b60eecadf6d3760", "e4cc2e73c90e568791737c97d77acef83588185f", "cc28919313f897358ef864948c65318dc61cb03c", "b3857a590fd667ecc282f66d771e5b2773ce9632" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
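For readers who want to see how the kernel matrices referenced above are built, the following is a brute-force sketch of the presence bits and intersection string kernels over character 5-8 grams, computed over the union of training and test documents so that the full kernel matrix used in the transductive setting is available. This is an assumption-laden illustration: the paper relies on the open-source implementation of Ionescu et al., the additional transductive transformation applied on top of the full matrix (the elided equation) is not reproduced, and the final normalization is a common convention rather than a detail stated in the text.

```python
import numpy as np
from collections import Counter


def char_ngram_counts(text, n_min=5, n_max=8):
    """Character n-gram counts for n in [n_min, n_max]."""
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts


def string_kernel_matrix(docs, variant="intersection"):
    """Full (train + test) string kernel matrix over a list of documents.

    variant="presence":     k(x, y) = number of distinct n-grams occurring in
                            both documents (presence bits kernel).
    variant="intersection": k(x, y) = sum over shared n-grams of
                            min(count_x, count_y) (intersection kernel).
    """
    profiles = [char_ngram_counts(d) for d in docs]
    m = len(docs)
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            shared = profiles[i].keys() & profiles[j].keys()
            if variant == "presence":
                K[i, j] = len(shared)
            else:
                K[i, j] = sum(min(profiles[i][g], profiles[j][g]) for g in shared)
            K[j, i] = K[i, j]
    # Normalize so that k(x, x) = 1 (guarding against empty documents).
    d = np.sqrt(np.diag(K))
    d = np.where(d == 0, 1.0, d)
    return K / np.outer(d, d)
```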
{ "caption": [ "Table 1. Multi-source cross-domain polarity classification accuracy rates (in %) of our transductive approaches versus a state-of-the-art baseline based on string kernels [13], as well as SST [3] and KE-Meta [12]. The best accuracy rates are highlighted in bold. The marker * indicates that the performance is significantly better than the best baseline string kernel according to a paired McNemar’s test performed at a significance level of 0.01.", "Table 2. Single-source cross-domain polarity classification accuracy rates (in %) of our transductive approaches versus a state-of-the-art baseline based on string kernels [13], as well as SFA [32], CORAL [40] and TR-TrAdaBoost [15]. The best accuracy rates are highlighted in bold. The marker * indicates that the performance is significantly better than the best baseline string kernel according to a paired McNemar’s test performed at a significance level of 0.01." ], "file": [ "8-Table1-1.png", "9-Table2-1.png" ] }
[ "How long is the dataset?", "What is a string kernel?" ]
[ [ "1811.01734-Polarity Classification-1" ], [ "1811.01734-String Kernels-0" ] ]
[ "8000", "String kernel is a technique that uses character n-grams to measure the similarity of strings" ]
482
1804.08782
Towards an Unsupervised Entrainment Distance in Conversational Speech using Deep Neural Networks
Entrainment is a known adaptation mechanism that causes interaction participants to adapt or synchronize their acoustic characteristics. Understanding how interlocutors tend to adapt to each other's speaking style through entrainment involves measuring a range of acoustic features and comparing those via multiple signal comparison methods. In this work, we present a turn-level distance measure obtained in an unsupervised manner using a Deep Neural Network (DNN) model, which we call Neural Entrainment Distance (NED). This metric establishes a framework that learns an embedding from the population-wide entrainment in an unlabeled training corpus. We use the framework for a set of acoustic features and validate the measure experimentally by showing its efficacy in distinguishing real conversations from fake ones created by randomly shuffling speaker turns. Moreover, we show real world evidence of the validity of the proposed measure. We find that high value of NED is associated with high ratings of emotional bond in suicide assessment interviews, which is consistent with prior studies.
{ "paragraphs": [ [ "Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as coordination, synchrony, convergence etc. While there are various aspects and levels of entrainment BIBREF0 , there is also a general agreement that entrainment is a sign of positive behavior towards the other speaker BIBREF1 , BIBREF2 , BIBREF3 . High degree of vocal entrainment has been associated with various interpersonal behavioral attributes, such as high empathy BIBREF4 , more agreement and less blame towards the partner and positive outcomes in couple therapy BIBREF5 , and high emotional bond BIBREF6 . A good understanding of entrainment provides insights to various interpersonal behaviors and facilitates the recognition and estimation of these behaviors in the realm of Behavioral Signal Processing BIBREF7 , BIBREF8 . Moreover, it also contributes to the modeling and development of `human-like' spoken dialog systems or conversational agents.", "Unfortunately, quantifying entrainment has always been a challenging problem. There is a scarcity of reliable labeled speech databases on entrainment, possibly due to the subjective and diverse nature of its definition. This makes it difficult to capture entrainment using supervised models, unlike many other behaviors. Early studies on entrainment relied on highly subjective and context-dependent manual observation coding for measuring entrainment. The objective methods based on extracted speech features employed classical synchrony measures such as Pearson's correlation BIBREF0 and traditional (linear) time series analysis techniques BIBREF9 . Lee et al. BIBREF10 , BIBREF4 proposed a measure based on PCA representation of prosody and MFCC features of consecutive turns. Most of the these approaches assume a linear relationship between features of consecutive speaker turns which is not necessarily true, given the complex nature of entrainment. For example, the effect of rising pitch or energy can potentially have a nonlinear influence across speakers.", "Recently, various complexity measures (such as largest Lyapunov exponent) of feature streams based on nonlinear dynamical systems modeling showed promising results in capturing entrainment BIBREF5 , BIBREF6 . A limitation of this modeling, however, is the assumption of the short-term stationary or slowly varying nature of the features. While this can be reasonable for global or session-level complexity, the measure is not very meaningful capturing turn-level or local entrainment. Nonlinear dynamical measures also suffer from scalability to a multidimensional feature set, including spectral coefficients such as MFCCs. Further, all of the above metrics are knowledge-driven and do not exploit the vast amount of information that can be gained from existing interactions.", "A more holistic approach is to capture entrainment in consecutive speaker turns through a more robust nonlinear function. Conceptually speaking, such a formulation of entrainment is closely related to the problem of learning a transfer function which maps vocal patterns of one speaker turn to the next. A compelling choice to nonlinearly approximate the transfer function would be to employ Deep Neural Networks (DNNs). 
This is supported by recent promising applications of deep learning models, in both supervised and unsupervised paradigms, in modeling and classification of emotions and behaviors from speech. For example, in BIBREF11 the authors learned, in an unsupervised manner, a latent embedding towards identifying behavior in out-of-domain tasks. Similarly, in BIBREF12 , BIBREF13 the authors employ Neural Predictive Coding to derive embeddings that link to speaker characteristics in an unsupervised manner.", "We propose an unsupervised training framework to contextually learn the transfer function that ties the two speakers. The learned bottleneck embedding contains cross-speaker information closely related to entrainment. We define a distance measure between the consecutive speaker turns represented in the bottleneck feature embedding space. We call this metric the Neural Entrainment Distance (NED).", "Towards this modeling approach we use features that have already been established as useful for entrainment. The majority of research BIBREF0 , BIBREF14 , BIBREF10 , BIBREF5 , BIBREF6 focused on prosodic features like pitch, energy, and speech rate. Others also analyzed entrainment in spectral and voice quality features BIBREF10 , BIBREF4 . Unlike classical nonlinear measures, we jointly learn from a multidimensional feature set comprising prosodic, spectral, and voice quality features.", "We then experimentally investigate the validity and effectiveness of the NED measure in association with interpersonal behavior." ], [ "We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher." ], [ "A number of audio preprocessing steps are required in the entrainment framework for obtaining boundaries of relevant segments of audio from consecutive turns. First, we perform voice activity detection (VAD) to identify the speech regions. Following this, speaker diarization is performed in order to distinguish speech segments spoken by different speakers. However, our training dataset, the Fisher corpus, also contains transcripts with speaker turn boundaries as well as timings for pauses within a turn. Since these time stamps appeared to be reasonably accurate, we use them as oracle VAD and diarization. On the other hand, for the Suicide Risk Assessment corpus, we perform VAD and diarization on raw audio to obtain the turn boundaries. Subsequently, we also split a single turn into inter-pausal units (IPUs) if there is any pause of at least 50 ms present within the turn. For the purpose of capturing entrainment-related information, we only consider the initial and the final IPU of every turn. This is done based on the hypothesis that during a turn-taking, entrainment is mostly prominent between the most recent IPU of the previous speaker's turn and the first IPU of the next speaker's turn BIBREF0 ." ], [ "We extract 38 different acoustic features from the segments (IPUs) of our interest. The extracted feature set includes 4 prosody features (pitch, energy and their first order deltas), 31 spectral features (15 MFCCs, 8 MFBs, 8 LSFs) and 3 voice quality features (shimmer and 2 variants of jitter). We found in our early analysis that derivatives of spectral and voice quality features do not seem to contribute significantly to entrainment, and hence we do not include them in the NED model.
The feature extraction is performed with a Hamming window of 25 ms width and 10 ms shift using the OpenSMILE toolkit BIBREF17 . For pitch, we perform an additional post-processing by applying a median-filter based smoothing technique (with a window size of 5 frames) as pitch extraction is not very robust and often prone to errors, such as halving or doubling errors. We also perform z-score normalization of the features across the whole session, except for pitch and energy features, which are normalized by dividing them by their respective means." ], [ "We propose to calculate NED as directional entrainment-related measure from speaker 1 to speaker 2 for a change of turn as shown in Figure FIGREF6 . The segments of interest in this case are the final IPU of speaker 1's turn and the initial IPU of the subsequent turn by speaker 2, marked by the bounding boxes in the figure. As turn-level features, we compute six statistical functionals over all frames in those two IPUs, generating two sets of functionals of features for each pair of turns. The functionals we compute are as follows: mean, median, standard deviation, 1st percentile, 99th percentile and range between 99th and 1st percentile. Thus we obtain INLINEFORM0 turn-level features from each IPU representing the turn. Let us denote the turn-level feature vector of the final IPU of speaker 1 and the initial IPU of speaker 2 as INLINEFORM1 and INLINEFORM2 , respectively, for further discussion in the paper." ], [ "Most work in the entrainment literature directly computes a measure between INLINEFORM0 and INLINEFORM1 (such as correlation BIBREF0 ) or their lower-dimensional representations BIBREF10 . However, one conceptual limitation of all these approaches is that turn-level features INLINEFORM2 and INLINEFORM3 do not only contain the underlying acoustic information that can be entrained across turns, but also speaker-specific, phonetic and paralinguistic information that is specific to the corresponding turns and not influenced by the previous turn (non-entrainable). If we represent those two types of information as vector embeddings, INLINEFORM4 and INLINEFORM5 respectively, we can model turn-level feature vectors INLINEFORM6 as a nonlinear function INLINEFORM7 over them, i.e., INLINEFORM8 and INLINEFORM9 . In this formulation, the distance between INLINEFORM10 and INLINEFORM11 should be zero in the hypothetical case of `perfect' entrainment.", "Our goal is to approximate the inverse mappings that maps the feature vector INLINEFORM0 to entrainment embedding INLINEFORM1 and ideally to learn the same from `perfect' or very highly entrained turns. Unfortunately, in absence of such a dataset, we learn it from consecutive turns in real data where entrainment is present, at least to some extent. As shown in Figure FIGREF6 , we adopt a feed-forward deep neural network (DNN) as an encoder for this purpose.", "The different components of the model are described below:", "First we use INLINEFORM0 as the input to the encoder network. We choose the output of the encoder network, INLINEFORM1 to be undercomplete representation of INLINEFORM2 , by restricting the dimensionality of INLINEFORM3 to be lower than that of INLINEFORM4 .", " INLINEFORM0 is then passed through another feed-forward ( INLINEFORM1 ) network used as decoder to predict INLINEFORM2 . 
The output of the decoder is denoted as INLINEFORM3 .", "Then INLINEFORM0 and its reference INLINEFORM1 are compared to obtain the loss function of the model, INLINEFORM2 .", "Even though this deep neural network resembles autoencoder architectures, it does not reconstruct itself but rather tries to encode relevant information from one turn to predict the next turn, parallel to BIBREF12 , BIBREF13 , BIBREF11 . Thus the bottleneck embedding INLINEFORM0 can be considered closely related to the entrainment embedding INLINEFORM1 mentioned above." ], [ "In this work, we use two fully connected layers as hidden layers both in the encoder and decoder network. Batch normalization layers and Rectified Linear Unit (ReLU) activation layers (in respective order) are used between fully connected layers in both of the networks. The dimension of the embedding is chosen to be 30. The number of neuron units in the hidden layers are: [ 228 INLINEFORM0 128 INLINEFORM1 30 INLINEFORM2 128 INLINEFORM3 228 ]. We use smooth L1 norm, a variant of L1 norm which is more robust to outliers BIBREF18 , so that", " DISPLAYFORM0 ", "where", " DISPLAYFORM0 ", "and INLINEFORM0 is the dimension of INLINEFORM1 which is 228 in our case.", "For training the network, we choose a subset (80% of all sessions) of Fisher corpus and use all turn-level feature pairs ( INLINEFORM0 ). We employ the Adam optimizer BIBREF19 and a minibatch size of 128 for training the network. The validation error is computed on the validation subset (10% of the data) of the Fisher corpus and the best model is chosen." ], [ "After the unsupervised training phase, we use the encoder network to obtain the embedding representation ( INLINEFORM0 ) from any turn-level feature vector INLINEFORM1 . To quantify the entrainment from a turn to the subsequent turn, we extract turn-level feature vectors from their final and initial IPUs, respectively, denoted as INLINEFORM2 and INLINEFORM3 . Next we encode INLINEFORM4 and INLINEFORM5 using the pretrained encoder network and obtain INLINEFORM6 and INLINEFORM7 as the outputs, respectively. Then we compute a distance measure INLINEFORM8 , which we term Neural Entrainment Distance (NED), between the two turns by taking smooth L1 distance INLINEFORM9 and INLINEFORM10 .", " DISPLAYFORM0 ", "where INLINEFORM0 is defined in Equation (2) and INLINEFORM1 is the dimensionality of the embedding. Note that even though smooth L1 distance is symmetric in nature, our distance measure is still asymmetric because of the directionality in the training of the neural network model." ], [ "We conduct a number of experiments to validate NED as a valid proxy metric for entrainment." ], [ "We first create a fake session ( INLINEFORM0 ) from each real session ( INLINEFORM1 ) by randomly shuffling the speaker turns. Then we run a simple classification experiment of using the NED measure to identify the real session from the pair ( INLINEFORM2 , INLINEFORM3 ). The steps of the experiments are as follows:", "We compute NED for each (overlapping) pair of consecutive turns and their average across the session for both sessions in the pair ( INLINEFORM0 , INLINEFORM1 ).", "The session with lower NED is inferred to be the real one. 
The hypothesis behind this rule is that higher entrainment is seen across consecutive turns than randomly paired turns and is well captured through a lower value of proposed measure.", "If the inferred real session is indeed the real one, we consider it to be correctly classified.", "We compute classification accuracy averaged over 30 runs (to account for the randomness in creating the fake session) and report it in Table TABREF24 . The experiment is conducted on two datasets: a subset (10%) of Fisher corpus set aside as test data and Suicide corpus. We use a number of baseline measures:", "Baseline 1: smooth L1 distance directly computed between turn-level features ( INLINEFORM0 and INLINEFORM1 )", "Baseline 2: PCA-based symmetric acoustic similarity measure by Lee et al. BIBREF10 ", "Baseline 3: Nonlinear dynamical systems-based complexity measure BIBREF6 .", "", "For the baselines, we conduct the classification experiments in a similar manner. Since Baseline 1 and 2 have multiple measures, we choose the best performing one for reporting, thus providing an upper-bound performance. Also, for baseline 2 we choose the session with higher value of the measure as real, since it measures similarity.", "", "As we can see in Table TABREF24 , our proposed NED measure achieves higher accuracy than all baselines on the Fisher corpus. The accuracy of our measure declines in the Suicide corpus as compared to the Fisher corpus, which is probably due to data mismatch as the model was trained on Fisher (mismatch of acoustics, recording conditions, sampling frequency, interaction style etc.). However, our measure still performs better than all baselines on Suicide corpus.", "" ], [ "According to prior work, both from domain theory BIBREF16 and from experimental validation BIBREF6 , a high emotional bond in patient-therapist interactions in the suicide therapy domain is associated with more entrainment. In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment. We also compute the correlation of emotional bond with the baselines used in Experiment 1. We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. We test against the null hypothesis INLINEFORM2 that there is no linear association between emotional bond and the candidate measure.", "Results in Table TABREF26 show that the patient-to-therapist NED is negatively correlated with emotional bond with high statistical significance ( INLINEFORM0 ). This negative sign is consistent with previous studies as higher distance in acoustic features indicates lower entrainment. However, the therapist-to-patient NED does not have a significant correlation with emotional bond. A possible explanation for this finding is that the emotional bond is reported by the patient and influenced by the degree of their perceived therapist-entrainment. Thus, equipped with an asymmetric measure, we are also able to identify the latent directionality of the emotional bond metric. 
The complexity measure (Baseline 2) also shows statistically significant correlation, but the value of INLINEFORM1 is lower than that of the proposed measure.", "To analyze the embeddings encoded by our model, we also compute a t-SNE BIBREF20 transformation of the difference of all patient-to-therapist turn embedding pairs, denoted as INLINEFORM0 in Equation (3). Figure FIGREF27 shows the results of a session with high emotional bond and another one with low emotional bond (with values of 7 and 1 respectively) as a 2-dimensional scatter plot. Visibly there is some separation between the sessions with low and high emotional bond." ], [ "In this work, a novel deep neural network-based Neural Entrainment Distance (NED) measure is proposed for capturing entrainment in conversational speech. The neural network architecture consisting of an encoder and a decoder is trained on the Fisher corpus in an unsupervised training framework and then the measure is defined on the bottleneck embedding. We show that the proposed measure can distinguish between real and fake sessions by capturing presence of entrainment in real sessions. In this way we also validate the natural occurrence of vocal entrainment in dyadic conversations, well-known in psychology literature BIBREF21 , BIBREF22 , BIBREF23 . We further show that the measure for patient-to-therapist direction achieves statistically significant correlation with their perceived emotional bond. The proposed measure is asymmetric in nature and can be useful for analyzing different interpersonal (especially directional) behaviors in many other applications. Given the benefits shown by the unsupervised data-driven approach we will employ Recurrent Neural Networks (RNNs) to better capture temporal dynamics. We also intend to explore (weakly) supervised learning of entrainment using the bottleneck embeddings as features, in presence of session-level annotations." ], [ "The U.S. Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick MD 21702- 5014 is the awarding and administering acquisition office. This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Military Suicide Research Consortium under Award No. W81XWH-10-2-0181, and through the Psychological Health and Traumatic Brain Injury Research Program under Award No. W81XWH-15-1-0632. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the Department of Defense." ] ], "section_name": [ "Introduction", "Datasets", "Preprocessing", "Feature Extraction", "Turn-level Features", "Modeling with Neural Network", "Unsupervised Training of the Model", "Neural Entrainment Distance (NED) Measure", "Experimental Results", "Experiment 1: Classification of real vs. fake sessions", "Experiment 2: Correlation with Emotional Bond", "Conclusion and Future Work", "Acknowledgements" ] }
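A compact sketch of the NED model described above, assuming PyTorch: a feed-forward encoder-decoder with layer sizes 228-128-30-128-228, batch normalization and ReLU between fully connected layers, trained with the smooth L1 loss and Adam on (previous-turn, next-turn) feature pairs. Inputs are (batch, 228) tensors; normalizing the final distance by the embedding dimension is assumed to amount to taking the mean over dimensions, since the exact equation is elided in the text.

```python
import torch
import torch.nn as nn

FEAT_DIM, HID, EMB = 228, 128, 30   # 38 frame-level features x 6 functionals = 228


class NEDModel(nn.Module):
    """Encoder-decoder sketch: predict the next turn's features from the
    previous turn's; the 30-d bottleneck is the entrainment embedding."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FEAT_DIM, HID), nn.BatchNorm1d(HID), nn.ReLU(),
            nn.Linear(HID, EMB),
        )
        self.decoder = nn.Sequential(
            nn.Linear(EMB, HID), nn.BatchNorm1d(HID), nn.ReLU(),
            nn.Linear(HID, FEAT_DIM),
        )

    def forward(self, x1):
        z = self.encoder(x1)
        return self.decoder(z), z


def train_step(model, opt, x1, x2, loss_fn=nn.SmoothL1Loss()):
    """One minibatch update: x1 = final-IPU features of speaker 1's turn,
    x2 = initial-IPU features of speaker 2's following turn."""
    opt.zero_grad()
    x2_hat, _ = model(x1)
    loss = loss_fn(x2_hat, x2)      # smooth L1 between prediction and next turn
    loss.backward()
    opt.step()
    return loss.item()


def ned(model, x1, x2):
    """Neural Entrainment Distance between two consecutive turns: smooth L1
    distance between their encoder embeddings, averaged over dimensions."""
    model.eval()
    with torch.no_grad():
        z1, z2 = model.encoder(x1), model.encoder(x2)
        return nn.functional.smooth_l1_loss(z1, z2, reduction="mean").item()


# Usage sketch: model = NEDModel(); opt = torch.optim.Adam(model.parameters())
# for x1, x2 in loader: train_step(model, opt, x1, x2)   # minibatch size 128
```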
{ "answers": [ { "annotation_id": [ "3aa161f671fc6091fa79afb961c94e0c78098c78" ], "answer": [ { "evidence": [ "We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher." ], "extractive_spans": [ "Fisher Corpus English Part 1" ], "free_form_answer": "", "highlighted_evidence": [ "We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "bbec0482a5fbd70ce243e2fd95b601b76038cb4b" ], "answer": [ { "evidence": [ "According to prior work, both from domain theory BIBREF16 and from experimental validation BIBREF6 , a high emotional bond in patient-therapist interactions in the suicide therapy domain is associated with more entrainment. In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment. We also compute the correlation of emotional bond with the baselines used in Experiment 1. We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. We test against the null hypothesis INLINEFORM2 that there is no linear association between emotional bond and the candidate measure." ], "extractive_spans": [], "free_form_answer": "They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating", "highlighted_evidence": [ "In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment.", "We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "Which dataset do they use to learn embeddings?", "How do they correlate NED with emotional bond levels?" ], "question_id": [ "b653f55d1dad5cd262a99502f63bf44c58ccc8cf", "22c802872b556996dd7d09eb1e15989d003f30c0" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: An overview of unsupervised training of the model", "Table 1: Results of Experiment 1: classification accuracy (%) of real vs. fake sessions (averaged over 30 runs; standard deviation shown in parentheses)", "Figure 2: t-SNE plot of difference vector of encoded turn-level embeddings for sessions with low and high emotional bond", "Table 2: Correlation between emotional bond and various measures; TP: therapist-to-patient, PT: patient-to-therapist ∗p < 0.05 indicates statistically significant (strong) correlation" ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "4-Figure2-1.png", "4-Table2-1.png" ] }
[ "How do they correlate NED with emotional bond levels?" ]
[ [ "1804.08782-Experiment 2: Correlation with Emotional Bond-0" ] ]
[ "They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating" ]
483
1909.09270
Named Entity Recognition with Partially Annotated Training Data
Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated. We study the problem of Named Entity Recognition (NER) with partially annotated training data in which a fraction of the named entities are labeled, and all other tokens, entities or otherwise, are labeled as non-entity by default. In order to train on this noisy dataset, we need to distinguish between the true and false negatives. To this end, we introduce a constraint-driven iterative algorithm that learns to detect false negatives in the noisy set and downweigh them, resulting in a weighted training set. With this set, we train a weighted NER model. We evaluate our algorithm with weighted variants of neural and non-neural NER models on data in 8 languages from several language and script families, showing strong ability to learn from partial data. Finally, to show real-world efficacy, we evaluate on a Bengali NER corpus annotated by non-speakers, outperforming the prior state-of-the-art by over 5 points F1.
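The train-and-downweigh loop summarized in this abstract is spelled out in the Method section below; as a preview, here is a minimal sketch of Constrained Binary Learning under simplifying assumptions. The paper performs the inference step with an Integer Linear Program; the greedy selection below only mimics its two key constraints (manually labeled entities stay positive, and the number of positives grows toward a target entity ratio each round), and the classifier interface (`train_weighted_clf`, `predict_entity_score`) is hypothetical.

```python
import numpy as np


def constrained_binary_learning(X, is_labeled_entity, train_weighted_clf,
                                gold_entity_ratio, step=0.01):
    """Sketch of the CBL outer loop.

    X: token instances; is_labeled_entity: boolean numpy mask for the partial
    annotations (set P); train_weighted_clf(X, y, w) returns a weighted binary
    model whose predict_entity_score(X) gives entity-class confidences in [0, 1].
    Returns the final per-token weights passed to a weighted NER model.
    """
    n = len(X)
    y = is_labeled_entity.astype(int)          # by default, everything else is O
    w = np.ones(n)                             # Raw initialization: all weights 1
    budget = int(is_labeled_entity.sum())
    target = int(gold_entity_ratio * n)

    while budget < target:
        clf = train_weighted_clf(X, y, w)      # train on current labels and weights
        scores = clf.predict_entity_score(X)   # confidence that each token is an entity

        budget = min(int(budget + step * n), target)   # budget grows each round
        chosen = set(np.flatnonzero(is_labeled_entity))  # constraint: P stays positive
        for i in np.argsort(-scores):
            if len(chosen) >= budget:
                break
            chosen.add(int(i))

        y = np.zeros(n, dtype=int)
        y[list(chosen)] = 1
        w = np.where(y == 1, 1.0, 1.0 - scores)  # downweigh likely false negatives

    # Final weighting: trusted entities keep weight 1; every other token is
    # weighted by the classifier's confidence that it really is non-entity.
    clf = train_weighted_clf(X, y, w)
    scores = clf.predict_entity_score(X)
    return np.where(is_labeled_entity, 1.0, 1.0 - scores)
```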
{ "paragraphs": [ [ "Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather.", "We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives.", "To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task.", "We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods." ], [ "The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data.", "FernandesBr11 introduces a transductive version of structured perceptron for partially annotated sequences. However, their definition of partial annotation is labels removed at random, so examples from all classes are still available if not contiguous.", "Fidelity Weighted Learning BIBREF3 uses a teacher/student model, in which the teacher has access to (a small amount) of high quality data, and uses this to guide the student, which has access to (a large amount) of weak data.", "HedderichKl18, following GoldbergerBe17, add a noise adaptation layer on top of an LSTM, which learns how to correct noisy labels, given a small amount of training data. We compare against this model in our experiments.", "In the world of weak supervision, Snorkel BIBREF4, BIBREF5, is a system that combines automatic labeling functions with data integration and noise reduction methods to rapidly build large datasets. They rely on high recall and consequent redundancy of the labeling functions. 
We argue that in certain realistic cases, high-recall candidate identification is unavailable.", "We draw inspiration from the Positive-Unlabeled (PU) learning framework BIBREF6, BIBREF7, BIBREF8, BIBREF9. Originally introduced for document classification, PU learning addresses problems where examples of a single class (for example, sports) are easy to obtain, but a full labeling of all other classes is prohibitively expensive.", "Named entity classification as an instance of PU learning was introduced in Grave14, which uses constrained optimization with constraints similar to ours. However, they only address the problem of named entity classification, in which mentions are given, and the goal is to assign a type to a named-entity (like `location', `person', etc.) as opposed to our goal of identifying and typing named entities.", "Although the task is slightly different, there has been work on building `silver standard' data from Wikipedia BIBREF10, BIBREF11, BIBREF12, using hyperlink annotations as the seed set and propagating throughout the document.", "Partial annotation in various forms has also been studied in the contexts of POS-tagging BIBREF13, word sense disambiguation BIBREF14, temporal relation extraction BIBREF15, dependency parsing BIBREF16, and named entity recognition BIBREF17.", "In particular, BIBREF17 study a similar problem with a few key differences: since they remove entity surfaces randomly, the dataset is too easy; and they do not use constraints on their output. We compare against their results in our experiments.", "Our proposed method is most closely aligned with the Constraint Driven Learning (CoDL) framework BIBREF18, in which an iterative algorithm reminiscent of self-training is guided by constraints that are applied at each iteration." ], [ "Our method assigns instance weights to all negative elements (tokens tagged as O), so that false negatives have low weights, and all other instances have high weights. We calculate weights according to the confidence predictions of a classifier trained iteratively over the partially annotated data. We refer to our method as Constrained Binary Learning (CBL).", "We will first describe the motivation for this approach before moving on to the mechanics. We start with partially annotated data (which we call set $T$) in which some, but not all, positives are annotated (set $P$), and no negative is labeled. By default, we assume that any instance not labeled as positive is labeled as negative as opposed to unlabeled. This data (set $N$) is noisy in the sense that many true positives are labeled as negative (these are false negatives). Clearly, training on $T$ as-is will result in a noisy classifier.", "Two possible approaches are: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. The former method affords more training data, but runs the risk of adding noise, which could be worse than the original partial annotations. The latter is more forgiving because of an asymmetry in the penalties: it is important to remove all false negatives in $N$, but inadvertently removing true negatives from $N$ is typically not a problem, especially in NER, where negative examples dominate. 
Further, a binary model (only two labels) is sufficient in this case, as we need only detect entities, not type them.", "We choose the latter method, but instead of removing false negatives, we adopt an instance-weighting approach, in which each instance is assigned a weight $v_i \\ge 0$ according to confidence in the labeling of that instance. A weight of 0 means that the loss this instance incurs during training will not update the model.", "With this in mind, CBL takes two phases: first, it learns a binary classifier $\\lambda $ using a constrained iterative process modeled after the CODL framework BIBREF18, and depicted in Figure FIGREF5. The core of the algorithm is the train-predict-infer loop. The training process (line 4) is weighted, using weights $V$. At the start, these can be all 1 (Raw), or can be initialized with prior knowledge. The learned model is then used to predict on all of $T$ (line 5). In the inference step (line 6), we take the predictions from the prior round and the constraints $C$ and produce a new labeling on $T$, and a new set of weights $V$. The details of this inference step are presented later in this section. Although our ultimate strategy is simply to assign weights (not change labels), in this inner loop, we update the labels on $N$ according to classifier predictions.", "In the second phase of CBL, we use the $\\lambda $ trained in the previous phase to assign weights to instances as follows:", "Where $P_{\\lambda }(y_i=\\text{O} \\mid x_i)$ is understood as the classifier's confidence that instance $x_i$ takes the negative label. In practice it is sufficient to use any confidence score from the classifier, not necessarily a probability. If the classifier has accurately learned to detect entities, then for all the false negatives in $N$, $P_{\\lambda }(y_i=\\text{O}|x_i)$ is small, which is the goal.", "Ultimately, we send the original multiclass partially annotated dataset along with final weights $V$ to a standard weighted NER classifier to learn a model. No weights are needed at test time." ], [ "So far, we have given a high-level view of the algorithm. In this section, we will give more low-level details, especially as they relate to the specific problem of NER. One contribution of this work is the inference step (line 6), which we address using a constrained Integer Linear Program (ILP) and describe in this section. However, the constraints are based on a value we call the entity ratio. First, we describe the entity ratio, then we describe the constraints and stopping condition of the algorithm." ], [ "We have observed that NER datasets tend to hold a relatively stable ratio of entity tokens to total tokens. We refer to this ratio as $b$, and define it with respect to some labeled dataset as:", "where $N$ is the set of negative examples. Previous work has shown that in fully-annotated datasets the entity ratio tends to be about $0.09 \\pm 0.05$, depending on the dataset and genre BIBREF19. Intuitively, knowledge of the gold entity ratio can help us estimate when we have found all the false negatives.", "In our main experiments, we assume that the entity ratio with respect to the gold labeling is known for each training dataset. A similar assumption was made in ElkanNo08 when determining the $c$ value, and in Grave14 in the constraint determining the percentage of other examples. 
However, we also show in Section that knowledge of this ratio is not strictly necessary, and a flat value across all datasets produces similar performance.", "With a weighted training set, it is also useful to define the weighted entity ratio.", "When training an NER model on weighted data, one can change the weighted entity ratio to achieve different effects. To make balanced predictions on test, the entity ratio in the training data should roughly match that of the test data BIBREF20. To bias a model towards predicting positives or predicting negatives, the weighted entity ratio can be set higher or lower respectively. This effect is pronounced when using linear methods for NER, but not as clear in neural methods.", "To change the entity ratio, we scale the weights in $N$ by a scaling constant $\\gamma $. Targeting a particular $b^*$, we may write:", "We can solve for $\\gamma $:", "To obtain weights, $v^*_i$, that attain the desired entity ratio, $b^*$, we scale all weights in $N$ by $\\gamma $.", "In the train-predict-infer loop, we balance the weights to a value near the gold ratio before training." ], [ "We encode our constraints with an Integer Linear Program (ILP), shown in Figure FIGREF17. Intuitively, the job of the inference step is to take predictions ($\\hat{T}$) and use knowledge of the task to `fix' them.", "In the objective function (Eqn. DISPLAY_FORM18), token $i$ is represented by two indicator variables $y_{0i}$ and $y_{1i}$, representing negative and positive labels, respectively. Associated prediction scores $C_0$ and $C_1$ are from the classifier $\\lambda $ in the last round of predictions. The first constraint (Eqn. ) encodes the fact that an instance cannot be both an entity and a non-entity.", "The second constraint (Eqn. ) enforces the ratio of positive to total tokens in the corpus to match a required entity ratio. $|T|$ is the total number of tokens in the corpus. $b$ is the required entity ratio, which increases at each iteration. $\\delta $ allows some flexibility, but is small.", "Constraint encodes that instances in $P$ should be labeled positive since they were manually labeled and are by definition trustworthy. We set $\\xi \\ge 0.99$.", "This framework is flexible in that more complex language- or task-specific constraints could be added. For example, in English and many other languages with Latin script, it may help to add a capitalization constraint. In languages with rich morphology, certain suffixes may indicate or contraindicate a named entity. For simplicity, and because of the number of languages in our experiments, we use only a few constraints.", "After the ILP has selected predictions, we assign weights to each instance in preparation for training the next round. The decision process for an instance is:", "This is similar to Equation (DISPLAY_FORM6), except that the set of tokens that the ILP labeled as positive is larger than $P$. With new labels and weights, we start the next iteration.", "The stopping condition for the algorithm is related to the entity ratio. One important constraint (Eqn. ) governs how many positives are labeled at each round. This number starts at $|P|$ and is increased by a small value at each iteration, thereby improving recall. Positive instances are chosen in two ways. First, all instances in $P$ are constrained to be labeled positive (Eqn. ). Second, the objective function ensures that high-confidence positives will be chosen. 
The stopping condition is met when the number of required positive instances (computed using gold unweighted entity ratio) equals the number of predicted positive instances." ], [ "We measure the performance of our method on 8 different languages using artificially perturbed labels to simulate the partial annotation setting." ], [ "We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.", "The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25." ], [ "We create partial annotations by perturbing gold annotated data in two ways: lowering recall (to simulate missing entities), and lowering precision (to simulate noisy annotations).", "To lower recall, we replace gold named entity tags with $O$ tags (for non-name). We do this by grouping named entity surface forms, and replacing tags on all occurrences of a randomly selected surface form until the desired amount remains. For example, if the token `Bangor' is chosen to be untagged, then every occurrence of `Bangor' will be untagged. We chose this slightly complicated method because the simplest idea (remove mentions randomly) leaves an artificially large diversity of surface forms, which makes the problem of discovering noisy entities easier.", "To lower precision, we tag a random span (of a random start position, and a random length between 1 and 3) with a random named entity tag. We continue this process until we reach the desired precision. When both precision and recall are to be perturbed, the recall adjustment is made first, and then the number of random spans to be added is calculated by the entities that are left." ], [ "In principle, CBL can use any NER method that can be trained with instance weights. We experiment with both non-neural and neural models." ], [ "For our non-neural system, we use a version of Cogcomp NER BIBREF24, BIBREF25 modified to use Weighted Averaged Perceptron. This operates on a weighted training set $D_w = \\lbrace (x_i, y_i, v_i) \\rbrace _{i=1}^N $, where $N$ is the number of training examples, and $v_i \\ge 0$ is the weight on the $i$th training example. In this non-neural system, a training example is a word with context encoded in the features. We change only the update rule, where the learning rate $\\alpha $ is multiplied by the weight:", "We use a standard set of features, as documented in BIBREF24. In order to keep the language-specific resources to a minimum, we did not use any gazetteers for any language. One of the most important features is Brown clusters, trained for 100, 500, and 1000 clusters for the CoNLL languages, and 2000 clusters for the remaining languages. We trained these clusters on Wikipedia text for the four CoNLL languages, and on the same monolingual text used to train the word vectors (described in Section SECREF26)." ], [ "A common neural model for NER is the BiLSTM-CRF model BIBREF26. 
However, because the Conditional Random Field (CRF) layer calculates loss at the sentence level, we need a different method to incorporate token weights. We use a variant of the CRF that allows partial annotations by marginalizing over all possible sequences BIBREF27.", "When using a standard BiLSTM-CRF model, the loss of a dataset ($D$) composed of sentences ($s$) is calculated as:", "Where $P_\\theta (\\mathbf {y}^{(s)} | \\textbf {x}^{(s)})$ is calculated by the CRF over outputs from the BiLSTM. In the marginal CRF framework, it is assumed that $\\mathbf {y}^{(s)}$ is necessarily partial, denoted as $\\mathbf {y}^{(s)}_p$. To incorporate partial annotations, the loss is calculated by marginalizing over all possible sequences consistent with the partial annotations, denoted as $C(\\mathbf {y}_p^s)$.", "However, this formulation assumes that all possible sequences are equally likely. To address this, BIBREF17 introduced a way to weigh sequences.", "It's easy to see that this formulation is a generalization of the standard CRF if $q(.)=1$ for the gold sequence $\\mathbf {y}$, and 0 for all others.", "The product inside the summation depends on tag transition probabilities and tag emission probabilities, as well as token-level “weights\" over the tagset. These weights can be seen as defining a soft gold labeling for each token, corresponding to confidence in each label.", "For clarity, define the soft gold labeling over each token $x_i$ as $\\mathbf {G}_i \\in [0,1]^{L}$, where $L$ is the size of the labelset. Now, we may define $q(.)$ as:", "Where $G_i^{y_i}$ is understood as the weight in $\\mathbf {G}_i$ that corresponds to the label $y_i$.", "We incorporate our instance weights in this model with the following intuitions. Recall that if an instance weight $v_i=0$, this indicates low confidence in the label on token $x_i$, and therefore the labeling should not update the model at training time. Conversely, if $v_i=1$, then this label is to be trusted entirely.", "If $v_i=0$, we set the soft labeling weights over $x_i$ to be uniform, which is as good as no information. Since $v_i$ is defined as confidence in the O label, the soft labeling weight for O increases proportionally to $v_i$. Any remaining probability mass is distributed evenly among the other labels.", "To be precise, for tokens in $N$, we calculate values for $\\mathbf {G}_i$ as follows:", "For example, consider phase 1 of Constrained Binary Learning, in which the labelset is collapsed to two labels ($L=2$). Assuming that the O label has index 0, then if $v_i=0$, then $\\mathbf {G}_i = [0.5, 0.5]$. If $v_i=0.6$, then $\\mathbf {G}_i = [0.6, 0.4]$.", "For tokens in $P$ (which have some entity label with high confidence), we always set $\\mathbf {G}_i$ with 1 in the given label index, and 0 elsewhere.", "We use pretrained GloVe BIBREF28 word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. The other languages are distributed with monolingual text BIBREF23, which we used to train our own skip-n-gram vectors." ], [ "We compare against several baselines, including two from prior work." ], [ "The simplest baseline is to do nothing to the partially annotated data and train on it as is." ], [ "Although CBL works with no initialization (that is, all tokens with weight 1), we found that a good weighting scheme can boost performance for certain models. We design weighting schemes that give instances in $N$ weights corresponding to an estimate of the label confidence. 
For example, non-name tokens such as respectfully should have weight 1, but possible names, such as Russell, should have a low weight, or 0. We propose two weighting schemes: frequency-based and window-based.", "For the frequency-based weighting scheme, we observed that names have relatively low frequency (for example, Kennebunkport, Dushanbe) and common words are rarely names (for example the, and, so). We weigh each instance in $N$ according to its frequency.", "where $freq(x_i)$ is the frequency of the $i^{th}$ token in $N$ divided by the count of the most frequent token. In our experiments, we computed frequencies over $P+N$, but these could be estimated on any sufficiently large corpus. We found that the neural model performed poorly when the weights followed a Zipfian distribution (e.g. most weights very small), so for those experiments, we took the log of the token count before normalizing.", "For the window-based weighting scheme, noting that names rarely appear immediately adjacent to each other in English text, we set weights for tokens within a window of size 1 of a name (identified in $P$) to be $1.0$, and for tokens farther away to be 0.", "where $d_i$ is the distance of the $i^{th}$ token to the nearest named entity in $P$.", "Finally, we combine the two weighting schemes as:" ], [ "BIBREF17 propose a model based on marginal CRF BIBREF27 (described in Section SECREF26). They follow a self-training framework with cross-validation, using the trained model over all but one fold to update gold labeling distributions in the final fold. This process continues until convergence. They use a partial-CRF framework similar to ours, but taking predictions at face value, without constraints." ], [ "Following BIBREF30, we used a neural network with a noise adaptation layer. This extra layer attempts to correct noisy examples given a probabilistic confusion matrix of label noise. Since this method needs a small amount of labeled data, we selected 500 random tokens to be the gold training set, in addition to the partial annotations.", "As with our BiLSTM experiments, we use pretrained GloVe word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. We omit results from the remaining languages because the scores were substantially worse even than training on raw annotations." ], [ "We show results from our experiments in Table TABREF30. In all experiments, the training data is perturbed at 90% precision and 50% recall. These parameters are similar to the scores obtained by human annotators in a foreign language (see Section SECREF5). We evaluate each experiment with both non-neural and neural methods.", "First, to get an idea of the difficulty of NER in each language, we report scores from models trained on gold data without perturbation (Gold). Then we report results from an Oracle Weighting scheme (Oracle Weighting) that takes partially annotated data and assigns weights with knowledge of the true labels. Specifically, mislabeled entities in set $N$ are given weight 0, and all other tokens are given weight 1.0. This scheme is free from labeling noise, but should still get lower scores than Gold because of the smaller number of entities. Since our method estimates these weights, we do not expect CBL to outperform the Oracle method. Next, we show results from all baselines. The bottom two sections are our results, first with no initialization (Raw), and CBL over that, then with Combined Weighting initialization, and CBL over that." 
], [ "Regardless of initialization or model, CBL improves over the baselines. Our best model, CBL-Raw BiLSTM-CRF, improves over the Raw Annotations BiLSTM-CRF baseline by 11.2 points F1, and the Self-training prior work by 2.6 points F1, showing that it is an effective way to address the problem of partial annotation. Further, the best CBL version for each model is within 3 points of the corresponding Oracle ceiling, suggesting that this weighting framework is nearly saturated.", "The Combined weighting scheme is surprisingly effective for the non-neural model, which suggests that the intuition about frequency as distinction between names and non-names holds true. It gives modest improvement in the neural model. The Self-training method is effective, but is outperformed by our best CBL method, a difference we discuss in more detail in Section SECREF43. The Noise Adaptation method outperforms the Raw annotations Cogcomp baseline in most cases, but does not reach the performance of the Self-training method, despite using some fully labeled data.", "It is instructive to compare the neural and non-neural versions of each setup. The neural method is better overall, but is less able to learn from the knowledge-based initialization weights. In the non-neural method, the difference between Raw and Combined is nearly 20 points, but the difference in the neural model is less than 3 points. Combined versions of the non-neural method outperform the neural method on 3 languages: Dutch, Arabic, and Hindi. Further, in the neural method, CBL-Raw is always worse than CBL-Combined. This may be due to the way that weights are used in each model. In the non-neural model, a low enough weight completely cancels the token, whereas in the neural model it is still used in training. Since the neural model performs well in the Oracle setting, we know that it can learn from hard weights, but it may have trouble with the subtle differences encoded in frequencies. We leave it to future work to discover improved ways of incorporating instance weights in a BiLSTM-CRF.", "In seeking to understand the details of the other results, we need to consider the precision/recall tradeoff. First, all scores in the Gold row had higher precision than recall. Then, training on raw partially annotated data biases a classifier strongly towards predicting few entities. All results from the Raw annotations row have precision more than double the recall (e.g. Dutch Precision, Recall, F1 were: 91.5, 32.4, 47.9). In this context, the problem this paper explores is how to improve the recall of these datasets without harming the precision." ], [ "While our method has several superficial similarities with prior work, most notably BIBREF17, there are some crucial differences.", "Our methods are similar in that they both use a model trained at each step to assign a soft gold-labeling to each token. Each algorithm iteratively trains models using weights from the previous steps.", "One difference is that BIBREF17 use cross-validation to train, while we follow BIBREF18 and retrain with the entire training set at each round.", "However, the main difference has to do with the focus of each algorithm. Recall the discussion in Section SECREF3 regarding the two possible approaches of 1) find the false negatives and label them correctly, and 2) find the false negatives and remove them. Conceptually, the former was the approach taken by BIBREF17, the latter was our approach. 
Another way to look at this is as focusing on predicting correct tag labels (BIBREF17) or focus on predicting O tags with high confidence (ours).", "Even though they use soft labeling (which they show to be consistently better than hard labeling), it is possible that the predicted tag distribution is incorrect. Our approach allows us to avoid much of the inevitable noise that comes from labelling with a weak model." ], [ "So far our experiments have shown effectiveness on artificially perturbed labels, but one might argue that these systematic perturbations don't accurately simulate real-world noise. In this section, we show how our methods work in a real-world scenario, using Bengali data partially labeled by non-speakers." ], [ "In order to compare with prior work, we used the train/test split from ZPWVJKM16. We removed all gold labels from the train split, romanized it BIBREF31, and presented it to two non-Bengali speaking annotators using the TALEN interface BIBREF32. The instructions were to move quickly and annotate names only when there is high confidence (e.g. when you can also identify the English version of the name). They spent about 5 total hours annotating, without using Google Translate. This sort of non-speaker annotation is possible because the text contains many `easy' entities – foreign names – which are noticeably distinct from native Bengali words. For example, consider the following:", "Romanized Bengali: ebisi'ra giliyyaana phinnddale aaja pyaalestaaina adhiinastha gaajaa theke aaja raate ekhabara jaaniyyechhena .", "Translation: ABC's Gillian Fondley has reported today from Gaza under Palestine today.", "The entities are Gillian Findlay, ABC, Palestine, and Gaza. While a fast-moving annotator may not catch most of these, `pyaalestaaina' could be considered an `easy' entity, because of its visual and aural similarity to `Palestine.' A clever annotator may also infer that if Palestine is mentioned, then Gaza may be present.", "Annotators are moving fast and being intentionally non-thorough, so the recall will be low. Since they do not speak Bengali, there are likely to be some mistakes, so the precision may drop slightly also. This is exactly the noisy partial annotation scenario addressed in this paper. The statistics of this data can be seen in Table TABREF49, including annotation scores computed with respect to the gold training data for each annotator, as well as the combined score.", "We show results in Table TABREF50, using the BiLSTM-CRF model. We compare against other low-resource approaches published on this dataset, including two based on Wikipedia BIBREF33, BIBREF12, another based on lexicon translation from a high-resource language BIBREF34. These prior methods operate under somewhat different paradigms than this work, but have the same goal: maximizing performance in the absence of gold training data.", "Raw annotations is defined as before, and gives similar high-precision low-recall results. The Combined Weighting scheme improves over Raw annotations by 10 points, achieving a score comparable to the prior state of the art. Beyond that, CBL-Raw outperforms the prior best by nearly 6 points F1, although CBL-Combined again underwhelms.", "To the best of our knowledge, this is the first result showing a method for non-speaker annotations to produce high-quality NER scores. The simplicity of this method and the small time investment for these results gives us confidence that this method can be effective for many low-resource languages." 
], [ "We explore an understudied data scenario, and introduce a new constrained iterative algorithm to solve it. This algorithm performs well in experimental trials in several languages, on both artificially perturbed data, and in a truly low-resource situation." ], [ "This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "" ] ], "section_name": [ "Introduction", "Related Work", "Constrained Binary Learning", "Constrained Binary Learning ::: NER with CBL", "Constrained Binary Learning ::: NER with CBL ::: Entity ratio and Balancing", "Constrained Binary Learning ::: NER with CBL ::: Constraints and Stopping Condition", "Experiments", "Experiments ::: Data", "Experiments ::: Artificial Perturbation", "Experiments ::: NER Models", "Experiments ::: NER Models ::: Non-neural Model", "Experiments ::: NER Models ::: Neural Model", "Experiments ::: Baselines", "Experiments ::: Baselines ::: Raw annotations", "Experiments ::: Baselines ::: Instance Weights", "Experiments ::: Baselines ::: Self-training with Marginal CRF", "Experiments ::: Baselines ::: Neural Network with Noise Adaptation", "Experiments ::: Experimental Setup and Results", "Experiments ::: Analysis", "Experiments ::: Difference from Prior Work", "Bengali Case Study", "Bengali Case Study ::: Non-speaker Annotations", "Conclusions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "3b4d2e3967c74f3e895d0db0bd637ae20798486e" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively small amount of noisy and incomplete annotations from non-speakers." ], "extractive_spans": [], "free_form_answer": "52.0%", "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively small amount of noisy and incomplete annotations from non-speakers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "3abf28979fc7a6feb0f9e45db143c4aab552947f" ], "answer": [ { "evidence": [ "We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.", "We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.", "The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25." ], "extractive_spans": [ "Bengali", "English, German, Spanish, Dutch", "Amharic", "Arabic", "Hindi", "Somali " ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text.", "We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.\n\nThe remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What was their F1 score on the Bengali NER corpus?", "Which languages are evaluated?" ], "question_id": [ "a7510ec34eaec2c7ac2869962b69cc41031221e5", "869aaf397c9b4da7ab52d6dd0961887ae08da9ae" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: This example has three entities: Arsenal, Unai Emery, and Arsene Wenger. In the Partial row, the situation addressed in this paper, only the first and last are tagged, and all other tokens are assumed to be non-entities, making Unai Emery a false negative as compared to Gold. Our model is an iteratively learned binary classifier used to assign weights to each token indicating its chances of being correctly labeled. The Oracle row shows optimal weights.", "Figure 2: Constrained Binary Learning (CBL) algorithm (phase 1). The core of the algorithm is in the while loop, which iterates over training on T , predicting on T and correcting those predictions.", "Table 1: Data statistics for all languages, showing number of tags and tokens in Train and Test. The tag counts represent individual spans, not tokens. That is, “[Barack Obama]PER” counts as one tag, not two. The b column shows the entity ratio as a percentage.", "Table 2: F1 scores on English, German, Spanish, Dutch, Amharic, Arabic, Hindi, and Somali. Each section shows performance of both Cogcomp (non-neural) and BiLSTM (neural) systems. Gold is using all available gold training data to train. Oracle Weighting uses full entity knowledge to set weights on N . The next section shows prior work, followed by our methods. The column to the farthest right shows the average score over all languages. Bold values are the highest per column. On average, our best results are found in the uninitialized (Raw) CBL from BiLSTM-CRF.", "Table 3: Experimenting with different entity ratios. Scores reported are average F1 across all languages. Gold b value refers to using the gold annotated data to calculate the optimal entity ratio. This table shows that exact knowledge of the entity ratio is not required for CBL to succeed.", "Table 4: Bengali Data Statistics. The P/R/F1 scores are computed for the non-speaker annotator with respect to the gold training data.", "Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively small amount of noisy and incomplete annotations from non-speakers." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "9-Table5-1.png" ] }
[ "What was their F1 score on the Bengali NER corpus?" ]
[ [ "1909.09270-9-Table5-1.png" ] ]
[ "52.0%" ]
484
1806.04524
Learning to Automatically Generate Fill-In-The-Blank Quizzes
In this paper we formalize the problem of automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each. We present an empirical study based on data obtained from a language learning platform, showing that both of our proposed settings offer promising results.
{ "paragraphs": [ [ "With the advent of the Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging on this kind of data.", "In particular, quizzes based on multiple choice questions (MCQs) have been proved efficient to judge students’ knowledge. However, manual construction of such questions often results a time-consuming and labor-intensive task.", "Fill-in-the-blank questions, where a sentence is given with one or more blanks in it, either with or without alternatives to fill in those blanks, have gained research attention recently. In this kind of question, as opposed to MCQs, there is no need to generate a WH style question derived from text. This means that the target sentence could simply be picked from a document on a corresponding topic of interest which results easier to automate.", "Fill-in-the-blank questions in its multiple-choice answer version, often referred to as cloze questions (CQ), are commonly used for evaluating proficiency of language learners, including official tests such as TOEIC and TOEFL BIBREF0 . They have also been used to test students knowledge of English in using the correct verbs BIBREF1 , prepositions BIBREF2 and adjectives BIBREF3 . BIBREF4 and BIBREF5 generated questions to evaluate student’s vocabulary.", "The main problem in CQ generation is that it is generally not easy to come up with appropriate distractors —incorrect options— without rich experience. Existing approaches are mostly based on domain-specific templates, whose elaboration relies on experts. Lately, approaches based on discriminative methods, which rely on annotated training data, have also appeared. Ultimately, these settings prevent end-users from participating in the elaboration process, limiting the diversity and variation of quizzes that the system may offer.", "In this work we formalize the problem of automatic fill-in-the-blank question generation and present an empirical study using deep learning models for it in the context of language learning. Our study is based on data obtained from our language learning platform BIBREF6 , BIBREF7 , BIBREF8 where users can create their own quizzes by utilizing freely available and open-licensed video content on the Web. In the platform, the automatic quiz creation currently relies on hand-crafted features and rules, making the process difficult to adapt. Our goal is to effectively provide an adaptive learning experience in terms of style and difficulty, and thus better serve users' needs BIBREF9 . In this context, we study the ability of our proposed architectures in learning to generate quizzes based on data derived of the interaction of users with the platform." ], [ "The problem of fill-in-the-blank question generation has been studied in the past by several authors. Perhaps the earlies approach is by BIBREF1 , who proposed a cloze question generation system which focuses on distractor generation using search engines to automatically measure English proficiency. In the same research line, we also find the work of BIBREF2 , BIBREF3 and BIBREF4 . 
In this context, the work of BIBREF10 probably represents the first effort in applying machine learning techniques for multiple-choice cloze question generation. The authors propose an approach that uses conditional random fields BIBREF11 based on hand-crafted features such as word POS tags.", "More recent approaches also focus on the problem of distractor selection or generation but apply it to different domains. For example, BIBREF12 present a system which adopts a semi-structured approach to generate CQs by making use of a knowledge base extracted from a Cricket portal. On the other hand, BIBREF9 present a generic semi-automatic system for quiz generation using linked data and textual descriptions of RDF resources. The system seems to be the first that can be controlled by difficulty level. The authors tested it using an on-line dataset about wildlife provided by the BBC. BIBREF13 present an automatic approach for CQ generation for student self-assessment.", "Finally, the work of BIBREF0 presents a discriminative approach based on SVM classifiers for distractor generation and selection using a large-scale language learners’ corpus. The SVM classifier works at the word level and takes a sentence in which the target word appears, choosing a verb as the best distractor given the context. Again, the SVM is based on human-engineered features such as n-grams, lemmas and dependency tags.", "Compared to the approaches above, our take is different since we work on fill-in-the-blank question generation without multiple-choice answers. Therefore, our problem focuses on word selection —the word to blank— given a sentence, rather than on distractor generation. To the best of our knowledge, our system is also the first to use representation learning for this task." ], [ "We formalize the problem of automatic fill-in-the-blank quiz generation from two different perspectives. These are designed to match specific machine learning schemes that are well-defined in the literature. In both cases, we consider a training corpus of INLINEFORM0 pairs INLINEFORM1 where INLINEFORM2 is a sequence of INLINEFORM3 tokens and INLINEFORM4 is an index that indicates the position that should be blanked inside INLINEFORM5 .", "This setting allows us to train from examples of single blank-annotated sentences. In this way, in order to obtain a sentence with several blanks, multiple passes over the model are required. This approach works in a way analogous to humans, where blanks are provided one at a time." ], [ "Firstly, we model the AQG as a sequence labeling problem. Formally, for an embedded input sequence INLINEFORM0 we build the corresponding label sequence by simply creating a one-hot vector of size INLINEFORM1 for the given class INLINEFORM2 . This vector can be seen as a sequence of binary classes, INLINEFORM3 , where only one item (the one in position INLINEFORM4 ) belongs to the positive class. Given this setting, the conditional probability of an output label is modeled as follows: DISPLAYFORM0 ", "Where, in our case, function INLINEFORM0 is modeled using a bidirectional LSTM BIBREF14 . Each predicted label distribution INLINEFORM1 is then calculated using the following formulas: DISPLAYFORM0 ", "The loss function is the average cross entropy for the mini-batch. Figure FIGREF5 summarizes the proposed model. DISPLAYFORM0 " ], [ "In this case, since the output of the model is a position in the input sequence INLINEFORM0 , the size of the output dictionary for INLINEFORM1 is variable and depends on INLINEFORM2 .
Regular sequence classification models use a softmax distribution over a fixed output dictionary to compute INLINEFORM3 and therefore are not suitable for our case. Therefore, we propose to use an attention-based approach that allows us to have a variable-size dictionary for the output softmax, in a way akin to Pointer Networks BIBREF15 . More formally, given an embedded input vector sequence INLINEFORM4 , we use a bidirectional LSTM to first obtain a dense representation of each input token. DISPLAYFORM0 ", "We later use pooling techniques including INLINEFORM0 and INLINEFORM1 to obtain a summarized representation INLINEFORM2 of the input sequence, or simply take the INLINEFORM3 hidden state as a drop-in replacement to do so. After this, we add a global content-based attention layer, which we use to compare that summarized vector to each hidden state INLINEFORM4 . Concretely, DISPLAYFORM0 ", "Where INLINEFORM0 and INLINEFORM1 are learnable parameters of the model, and the softmax normalizes the vector INLINEFORM2 to be an output distribution over a dictionary of size INLINEFORM3 . Figure FIGREF9 summarizes the proposed model graphically. Then, for a given sentence INLINEFORM4 , the goal of our model is to predict the most likely position INLINEFORM5 of the next word to be blanked." ], [ "Although the hand-crafted rule-based system currently used in our language learning platform offers us good results in general, we are interested in developing a more flexible approach that is easier to tailor depending on the case. In particular, in an adaptive learning setting where the goal is resource allocation according to the unique needs of each learner, rule-based methods for AQG appear to have insufficient flexibility and adaptability to accurately model the features of each learner or teacher.", "With this point in mind, this section presents an empirical study using state-of-the-art Deep Learning approaches for the problem of AQG. In particular, the objective is to test to what extent our proposed models are able to encode the behavior of the rule-based system. Ultimately, we hope that these can be used for a smooth transition from the current human-engineered feature-based system to a fully user-experience-based regime.", "In Natural Language Processing, deep models have succeeded in large part because they learn and use their own continuous numeric representational systems for words and sentences. In particular, distributed representations BIBREF16 applied to words BIBREF17 have been a major breakthrough. All our models start with random word embeddings; we leave the use of other pre-trained vectors for future work.", "Using our platform, we extracted anonymized user interaction data in the form of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing.", "As the system requires the input sentences to be tokenized and makes use of features such as word POS tags, the sentences in our dataset are processed using CoreNLP BIBREF18 . We also extract user-specific and quiz-specific information, including word-level learning records of the user, such as the number of times the learner made a mistake on that word, or whether the learner looked up the word in the dictionary.
In this study, however, we restrict our model to only look at word embeddings as input.", "We use the same data pre-processing for all of our models. We build the vocabulary using the train partition of our dataset with a minimum frequency of 1. We do not preserve case and obtain an unknown vocabulary of size 2,029, and a total vocabulary size of 66,431 tokens." ], [ "We use a 2-layer bidirectional LSTM, which we train using Adam BIBREF19 with a learning rate of INLINEFORM0 , clipping the gradient of our parameters to a maximum norm of 5. We use a word embedding size and hidden state size of 300 and add dropout BIBREF20 before and after the LSTM, using a drop probability of 0.2. We train our model for up to 10 epochs. Training lasts for about 3 hours.", "For evaluation, as accuracy would be misleading given the extremely unbalanced nature of the blanking scheme (there is only one positive-class example in each sentence), we use Precision, Recall and F1-Score over the positive class for development and evaluation. Table TABREF11 summarizes our obtained results." ], [ "In this case, we again use a 2-layer bidirectional LSTM, which we train using Adam with a learning rate of INLINEFORM0 , also clipping the gradient of our parameters to a maximum norm of 5. Even with these limits, convergence is faster than in the previous model, so we only trained the classifier for up to 5 epochs. Again, we use a word embedding and hidden state size of 300, and add dropout with drop probability of 0.2 before and after the LSTM. Our results for different pooling strategies showed no noticeable performance difference in preliminary experiments, so we report results using the last hidden state.", "For development and evaluation we used accuracy over the validation and test set, respectively. Table TABREF13 below summarizes our obtained results; we can see that the model was able to obtain a maximum accuracy of approximately 89% on the validation and testing sets." ], [ "In this paper we have formalized the problem of automatic fill-in-the-blank quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases.", "We have presented an empirical study in which we test the proposed architectures in the context of a language learning platform. Our results show that both of the proposed training schemes seem to offer fairly good results, with an Accuracy/F1-score of nearly 90%. We think this sets a clear future research direction, showing that it is possible to transition from a heavily hand-crafted approach for AQG to a learning-based approach on the basis of examples derived from the platform's unlabeled data. This is especially important in the context of adaptive learning, where the goal is to effectively provide a tailored and flexible experience in terms of style and difficulty.", "For future work, we would like to use different pre-trained word embeddings as well as other features derived from the input sentence to further improve our results. We would also like to test the power of the models in capturing different quiz styles from real questions created by professors." ] ], "section_name": [ "Introduction", "Related Work", "Proposed Approach", "AQG as Sequence Labeling", "AQG as Sequence Classification", "Empirical Study", "Sequence Labeling", "Sequence Classification", "Conclusions" ] }
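As an illustration of the sequence-labeling formulation described in the full text above, the following is a minimal sketch using the stated hyperparameters (2-layer bidirectional LSTM, 300-dimensional embeddings and hidden states, dropout 0.2 before and after the LSTM, gradient clipping at norm 5, per-token cross entropy). PyTorch is an assumption here, since the text does not name the framework used, and the class name, toy batch and vocabulary handling are illustrative only.

```python
import torch
import torch.nn as nn

class BlankPositionTagger(nn.Module):
    """Sequence-labeling view of AQG: for every token, predict whether it is
    the position to blank (binary label, exactly one positive per sentence)."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.dropout = nn.Dropout(dropout)          # dropout before and after the LSTM
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, 2)    # per-token binary logits

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        h, _ = self.bilstm(self.dropout(self.embed(token_ids)))
        return self.proj(self.dropout(h))           # (batch, seq_len, 2)

if __name__ == "__main__":
    model = BlankPositionTagger(vocab_size=66_431)
    x = torch.randint(0, 66_431, (4, 12))           # a toy batch of 4 sentences
    y = torch.zeros(4, 12, dtype=torch.long)
    y[:, 5] = 1                                     # one blanked position per sentence
    logits = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 2), y.reshape(-1))
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    print(float(loss))
```

The sequence-classification variant would instead score every input position with a content-based attention layer and apply a softmax over positions, in the spirit of Pointer Networks; the per-token tagging view above is the simpler of the two schemes to reproduce.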
{ "answers": [ { "annotation_id": [ "b1aa09cabb48baccf1c48b3cc6175de1f4d88cac" ], "answer": [ { "evidence": [ "Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing." ], "extractive_spans": [], "free_form_answer": "300,000 sentences with 1.5 million single-quiz questions", "highlighted_evidence": [ "We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "3c082867258ce66dab39835930d69dba2b676dcb" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "57a80d3aa64fc2d089bce4521a331109ebc620cb" ], "answer": [ { "evidence": [ "In this paper we have formalized the problem of automatic fill-on-the-blanks quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases." ], "extractive_spans": [ "sequence classification", "sequence labeling" ], "free_form_answer": "", "highlighted_evidence": [ "In this paper we have formalized the problem of automatic fill-on-the-blanks quiz generation using two well-defined learning schemes: sequence classification and sequence labeling." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "What is the size of the dataset?", "What language platform does the data come from?", "Which two schemes are used?" ], "question_id": [ "675f28958c76623b09baa8ee3c040ff0cf277a5a", "47b00652ac66039aafe886780e86961bfc5b466e", "79443bf3123170da44396b0481364552186abb91" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: Our sequence labeling model based on an LSTM for AQG.", "Figure 2: Our sequence classification model, based on an LSTM for AQG.", "Table 1: Results of the seq. labeling approach." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png" ] }
[ "What is the size of the dataset?" ]
[ [ "1806.04524-Empirical Study-3" ] ]
[ "300,000 sentences with 1.5 million single-quiz questions" ]
488
1612.06897
Fast Domain Adaptation for Neural Machine Translation
Neural Machine Translation (NMT) is a new approach for automatic translation of text from one human language into another. The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is gaining popularity in the research community because it outperformed traditional SMT approaches in several translation tasks at WMT and other evaluation tasks/benchmarks, at least for some language pairs. However, many of the enhancements in SMT over the years have not been incorporated into the NMT framework. In this paper, we focus on one such enhancement, namely domain adaptation. We propose an approach for adapting an NMT system to a new domain. The main idea behind domain adaptation is that a large amount of out-of-domain training data is available alongside only a small amount of in-domain training data. We report significant gains with our proposed method in both automatic metrics and a human subjective evaluation metric on two language pairs. With our adaptation method, we show a large improvement on the new domain while the performance on the general domain only degrades slightly. In addition, our approach is fast enough to adapt an already trained system to a new domain within a few hours, without the need to retrain the NMT model on the combined data, which usually takes several days/weeks depending on the volume of the data.
{ "paragraphs": [ [ "Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to the traditional statistical machine translation (SMT) models BIBREF0 , BIBREF1 , it has become very popular in the recent years BIBREF2 , BIBREF3 , BIBREF4 . With the great success of NMT, new challenges arise which have already been address with reasonable success in traditional SMT. One of the challenges is domain adaptation. In a typical domain adaptation setup such as ours, we have a large amount of out-of-domain bilingual training data for which we already have a trained neural network model (baseline). Given only an additional small amount of in-domain data, the challenge is to improve the translation performance on the new domain without deteriorating the performance on the general domain significantly. One approach one might take is to combine the in-domain data with the out-of-domain data and train the NMT model from scratch. However, there are two main problems with that approach. First, training a neural machine translation system on large data sets can take several weeks and training a new model based on the combined training data is time consuming. Second, since the in-domain data is relatively small, the out-of-domain data will tend to dominate the training data and hence the learned model will not perform as well on the in-domain test data. In this paper, we reuse the already trained out-of-domain system and continue training only on the small portion of in-domain data similar to BIBREF5 . While doing this, we adapt the parameters of the neural network model to the new domain. Instead of relying completely on the adapted (further-trained) model and over fitting on the in-domain data, we decode using an ensemble of the baseline model and the adapted model which tends to perform well on the in-domain data without deteriorating the performance on the baseline general domain." ], [ "Domain adaptation has been an active research topic for the traditional SMT approach in the last few years. The existing domain adaptation methods can be roughly divided into three different categories.", "First, the out-of-domain training data can be scored by a model built only on the in-domain training data. Based on the scores, we can either use a certain amount of best scoring out-of-domain training data to build a new translation system or assign a weight to each sentence which determines its contribution towards the training a new system. In SMT, this has been done for language model training BIBREF6 , BIBREF7 and translation model training BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . In contrast to SMT, training a NMT system from scratch is time consuming and can easily take several weeks.", "Second, different methods of interpolating in-domain and out-of-domain models BIBREF12 , BIBREF13 , BIBREF14 have been proposed. A widely used approach is to train an additional SMT system based only on the in-domain data in addition to the existing out-of-domain SMT system. By interpolating the phrase tables, the in-domain data can be integrated into the general system. In NMT, we do not have any phrase tables and can not use this method. Nevertheless, integrating the in-domain data with interpolation is faster than building a system from scratch.", "The third approach is called semi-supervised training, where a large in-domain monolingual data is first translated with a machine translation engine into a different language to generate parallel data. 
The automatic translations have been used for retraining the language model and/or the translation model BIBREF15 , BIBREF16 , BIBREF17 . Additional parallel data can also be created by back-translating monolingual target-language text into the source language BIBREF18 . The additional parallel training data can be used to train the NMT model and obtain improvements. BIBREF18 report substantial improvements when a large amount of back-translated parallel data is used. However, as we mentioned before, retraining the NMT model with large training data takes time, and in this case it is even more time-consuming since we first need to back-translate the target monolingual data and then build a system based on the combination of both the original parallel data and the back-translated data.", "For neural machine translation, BIBREF5 proposed to adapt an already existing NMT system to a new domain with further training on the in-domain data only. The authors report an absolute gain of 3.8 Bleu points compared to using the original model without further training. In our work, we utilize the same approach but ensemble the further trained model with the original model. In addition, we report results on the out-of-domain test sets and show how degradation of the translation performance on the out-of-domain data can be avoided. We further show how to avoid over-fitting on the in-domain training data and analyze how many additional epochs are needed to adapt the model to the new domain. We compare our adapted models with models either trained on the combined training data or the in-domain training data only and report results for different amounts of in-domain training data." ], [ "In all our experiments, we use our in-house attention-based NMT implementation which is similar to BIBREF4 , BIBREF19 . The approach is based on an encoder-decoder network. The encoder employs a bi-directional RNN to encode the source sentence ${\bf {x}}=({x_1, ... , x_l})$ into a sequence of hidden states ${\bf {h}}=({h_1, ..., h_l})$ , where $l$ is the length of the source sentence. Each $h_i$ is a concatenation of a left-to-right $\overrightarrow{h_i}$ and a right-to-left $\overleftarrow{h_i}$ RNN: $h_{i} = \begin{bmatrix} \overleftarrow{h}_i \\ \overrightarrow{h}_i \end{bmatrix} = \begin{bmatrix} \overleftarrow{f}(x_i, \overleftarrow{h}_{i+1}) \\ \overrightarrow{f}(x_i, \overrightarrow{h}_{i-1}) \end{bmatrix}$ ", "where $\overleftarrow{f}$ and $\overrightarrow{f}$ are two gated recurrent units (GRU) proposed by BIBREF20 .", "Given the encoded ${\bf h}$ , the decoder predicts the target translation by maximizing the conditional log-probability of the correct translation ${\bf y^*} = (y^*_1, ... y^*_m)$ , where $m$ is the length of the target. At each time $t$ , the probability of each word $y_t$ from a target vocabulary $V_y$ is: ", "$$p(y_t|{\bf h}, y^*_{t-1}..y^*_1) = g(s_t, y^*_{t-1}, H_{t}),$$ (Eq. 1) ", "where $g$ is a two-layer feed-forward neural network over the embedding of the previous target word $y^*_{t-1}$ , the hidden state $s_t$ , and the weighted sum of ${\bf h}$ ( $H_{t}$ ).", "Before we compute $s_t$ and $H_t$ , we first convert $s_{t-1}$ and the embedding of $y^*_{t-1}$ into an intermediate state $s^{\prime }_t$ with a GRU $u$ as: ", "$$s^{\prime }_t = u(s_{t-1}, y^*_{t-1}).$$ (Eq. 2) ", "Then we have $s_t$ as: ", "$$s_t = q(s^{\prime }_{t}, H_{t})$$ (Eq. 3) ", "where $q$ is a GRU.
The weighted sum $H_{t}$ is computed as: ", "$$H_t = \begin{bmatrix} \sum _{i=1}^{l}{(\alpha _{t,i} \cdot \overleftarrow{h}_i)} \\ \sum _{i=1}^{l}{(\alpha _{t,i} \cdot \overrightarrow{h}_i)} \end{bmatrix},$$ (Eq. 4) ", "The alignment weights, $\alpha $ in $H_t$ , are computed with a two-layer feed-forward neural network $r$ : ", "$$\alpha _{t,i} = \frac{\exp \lbrace r(s^{\prime }_{t}, h_{i})\rbrace }{\sum _{j=1}^{l}{\exp \lbrace r(s^{\prime }_{t}, h_{j})\rbrace }}$$ (Eq. 5) " ], [ "Our objectives in domain adaptation are twofold: (1) build an adapted system quickly, and (2) build a system that performs well on the in-domain test data without significantly degrading the system on the general domain. One possible approach to domain adaptation is to mix (possibly with a higher weight) the in-domain with the large out-of-domain data and retrain the system from scratch. However, training an NMT system on large amounts of parallel data (typically $>$ 4 million sentence pairs) can take several weeks. Therefore, we propose a method that does not require retraining on the large out-of-domain data and that can be applied relatively quickly, hence achieving both of our objectives.", "Our approach re-uses the already trained baseline model and continues the training for several additional epochs, but only on the small amount of in-domain training data. We call the model resulting from this kind of further training a continue model. Depending on the amount of in-domain training data, the continue model can over-fit on the new training data. In general, over-fitting means that the model performs very well on the training data, but worse on any other unseen data. To overcome this problem, we ensemble the continue model with the baseline model. This has the positive side effect that we not only get better translations for the new domain, but also stay close to the baseline model, which performs well in general. As the amount of in-domain training data is usually small, we can quickly adapt our baseline model to a different domain.", "In all our experiments, we use the NMT approach as described in Section \"Neural Machine Translation\" . We limit our source and target vocabularies to be the top $N$ most frequent words for each side. Words not in these vocabularies are mapped into a special unknown token UNK. During translation, we write the alignments (from the attention mechanism) and use these to replace the unknown tokens either with potential targets (obtained from an IBM model 1 dictionary trained on the parallel data or from the SMT phrase table) or with the source word itself or a transliteration of it (if no target was found in the dictionary, i.e., the word is a genuine OOV). We use an embedding dimension of 620 and fix the RNN GRU layers to be of 1000 cells each. For the training procedure, we use SGD BIBREF21 to update the model parameters with a mini-batch size of 64. The training data is shuffled after each epoch. All experiments are evaluated with both Bleu BIBREF22 and Ter BIBREF23 (both are case-sensitive)." ], [ "For the German $\rightarrow $ English translation task, we use an already trained out-of-domain NMT system (vocabulary size $N$ =100K) trained on the WMT 2015 training data BIBREF24 (3.9M parallel sentences). As in-domain training data, we use the TED talks from the IWSLT 2015 evaluation campaign BIBREF25 (194K parallel sentences). Corpus statistics can be found in Table 1 .
The data is tokenized and the German text is preprocessed by splitting German compound words with the frequency-based method as described in BIBREF26 . We use our in-house language identification tool to remove sentence pairs where either the source or the target is assigned the wrong language by our language ID.", "Experimental results can be found in Table 2 . The translation quality of an NMT system trained only on the in-domain data is not satisfactory. In fact, it performs even worse on both test sets compared to the baseline model, which is only trained on the out-of-domain data. By continuing the training of the baseline model on the in-domain data only, we get a gain of 4.4 points in Bleu and 3.1 points in Ter on the in-domain test set tst2013 after the second epoch. Nevertheless, we lose 2.1 points in Bleu and 3.9 points in Ter on the out-of-domain test set newstest2014. After continuing the training for 20 epochs, the model tends to overfit and the performance on both test sets degrades.", "To avoid over-fitting and to keep the out-of-domain translation quality close to the baseline, we ensemble the continue model with the baseline model. After 20 epochs, we only lose 0.2 points in Bleu and 0.6 points in Ter on the out-of-domain test set while we gain 4.2 points in Bleu and 3.7 points in Ter on tst2013. Each epoch of the continue training takes 1.8 hours. In fact, with only two epochs, we already have a system that performs very well on the in-domain data. At the same time, the loss of translation quality on the out-of-domain test set is minimal (i.e., negligible). In fact, we get a gain of 0.7 points in Bleu while losing 0.6 points in Ter on our out-of-domain test set.", "Figure 1 illustrates the learning curve of the continue training for different sizes of in-domain training data. For all setups, the translation quality massively drops on the out-of-domain test set. Further, the performance on the in-domain test set degrades as the neural network over-fits on the in-domain training data already after epoch 2.", "To study the impact of the in-domain data size on the quality of the adapted model, we report results for different sizes of the in-domain data. Figure 2 shows the learning curve of the ensemble of the baseline and the continue model for different sizes of in-domain training data. The in-domain data used is a randomly selected subset of the entire pool of the in-domain data available to us. We also report the result when all of the in-domain data in the pool is used. As shown in Figure 2 , the translation quality of the out-of-domain test set only degrades slightly for all the different sizes of the in-domain data we tried. However, the performance on the in-domain data significantly improves, reaching its peak just after the second epoch. We do not lose any translation quality on the in-domain test set by continuing the training for more epochs. Adding more in-domain data improves the score on the in-domain test set without seeing any significant degradation on the out-of-domain test set.", "In addition to evaluating on automatic metrics, we also performed a subjective human evaluation where a human annotator assigns a score based on the quality of the translation. The judgments are done by an experienced annotator (a native speaker of German and a fluent speaker of English). We ask our annotator to judge the translation output of different systems on a randomly selected in-domain sample of 50 sentences (maximum sentence length 50).
Each source sentence is presented to the annotator with all 3 different translations (baseline/continue/ensemble). The translations are presented in a blind fashion (i.e., the annotator is not aware of which system is which) and shuffled in random order. The evaluation is presented to the annotator via a web-page interface with all these translation pairs randomly ordered to disperse the three translations of each source sentence. The annotator judges each translation from 0 (very bad) to 5 (near perfect). The human evaluation results can be found in Table 3 . Both the continue model and the ensemble of the baseline with the continue model significantly outperform the baseline model on the in-domain data. Contrary to the automatic scores, the ensemble performs better than the continue model.", "We compare the training times of our different setups in Table 4 . Based on the automatic scores, it is sufficient to further train the baseline model for 2 epochs to adapt the model to the new domain. For the case of having only 25K parallel in-domain training sentences, it takes 30 minutes to adapt the model. If we use all available 192K sentences, the total training time is 3 hours and 40 minutes. By using all training data (both in-domain and out-of-domain together), we need 7 epochs, which sum up to a training time of 15 days and 11 hours." ], [ "For the Chinese $\rightarrow $ English experiments, we utilize an NMT system (vocabulary size $N$ =500K) trained on 11.6 million out-of-domain sentences from the DARPA BOLT project. We use 593k parallel sentences of internal in-domain data that is different from the BOLT informal news domain. Corpus statistics can be found in Table 5 .", "Experimental results can be found in Table 6 . Because the in-domain data is relatively large in this case, training an NMT model from scratch only on the in-domain data gives us similar performance on the in-domain test set compared to the baseline model that is trained only on the out-of-domain data. However, the performance on the out-of-domain test set is significantly worse. By continuing the training of the baseline model only on the in-domain data, we get an improvement of 9.5 points in Bleu and 12.2 points in Ter on the in-domain test set after 6 epochs. Unfortunately, the performance significantly drops on the out-of-domain test set. After 20 epochs, the performance on the in-domain data only improves slightly further, while much more is lost on the out-of-domain test set.", "To avoid significant degradation to the translation quality on the out-of-domain test set, we ensemble the continue and the baseline models. After 6 epochs, we get a gain of 7.2 points in Bleu and 10 points in Ter on the in-domain test set while losing only slightly on the out-of-domain test set. After 20 epochs, the performance on the in-domain test set is similar, while we lose an additional 1.5 points in Bleu and 1.1 points in Ter on the out-of-domain test set.", "Figure 3 illustrates the learning curves of the continue training for different sizes of in-domain training data. Adding more parallel in-domain training data helps to improve the performance on the in-domain test set. For all different training sizes, the translation quality drops similarly on the out-of-domain test set.", "Figure 4 shows the learning curves of the ensemble of the baseline and the continue model for different sizes of in-domain training data. For all training sizes, the translation quality of the out-of-domain test set only degrades slightly.
Nevertheless, the performance on the in-domain data significantly improves. By continuing the training for several epochs, we reach saturation on both test sets. Adding more in-domain data improves the score on the in-domain test set.", "Human judgment was performed (cf. Table 7 ) by another experienced annotator (a Chinese native speaker who is also fluent in English) on a randomly selected sample of 50 in-domain sentences. As in the German $\rightarrow $ English case, the annotator assigns a (0-5) score to each translation. Both the continue model and the ensemble of the baseline with the continue model outperform the baseline model. Furthermore, the ensemble of the continue model with the baseline model outperforms the continue training on its own.", "A comparison of the training times of our different setups can be found in Table 8 . Based on our experiments, it is sufficient to further train the baseline for 6 epochs to adapt the neural net to our new domain. By using all available in-domain training data, we have a total training time of 23 hours. A system based on both in-domain and out-of-domain training data already needs 77 hours and 30 minutes for one epoch. We trained the combined system for 8 epochs, which sum up to a total training time of 620 hours (25 days and 20 hours)." ], [ "We presented a fast and efficient approach for adapting an already existing NMT system to a new domain without degrading the translation quality on the out-of-domain test set. Our proposed method is based on two main steps: (a) train a model only on the in-domain data, initializing all parameters of the neural network model with the ones from the existing baseline model that is trained on the large amount of out-of-domain training data (in other words, we continue the training of the baseline model only on the in-domain data); (b) ensemble the continue model with the baseline model at decoding time. While step (a) can lead to significant gains of up to 9.9 points in Bleu and 12.2 points in Ter on the in-domain test set, it comes at the expense of significant degradation of the translation quality on the original out-of-domain test set. Furthermore, the continue model tends to overfit the small amount of in-domain training data and even degrades translation quality on the in-domain test sets if training continues beyond one or two epochs.", "Step (b) (i.e., ensembling of the baseline model with the continue model) ensures that the performance does not drop significantly on the out-of-domain test set while still getting significant improvements of up to 7.2 points in Bleu and 10 points in Ter on the in-domain test set. Even after only a few epochs of continue training, we get results that are close to the results obtained after 20 epochs. We also show significant improvements in human judgment. We presented results on two diverse language pairs, German $\rightarrow $ English and Chinese $\rightarrow $ English (usually very challenging pairs for machine translation)." ] ], "section_name": [ "Introduction", "Related Work", "Neural Machine Translation", "Domain Adaptation", "Experiments", "German → English", "Chinese → English", "Conclusion" ] }
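The adaptation recipe described above reduces to two steps: continue training the baseline model on the in-domain data for a few epochs, then ensemble the baseline and continue models at decoding time. The text does not state how the two models are combined at each decoding step, so the sketch below assumes the common choice of averaging their per-step word distributions with uniform weights; the function name and the toy vocabulary are illustrative only.

```python
import torch

def ensemble_step(logits_baseline, logits_continued, weight=0.5):
    """One decoding step of the baseline + continue-model ensemble.

    Both inputs are pre-softmax scores over the target vocabulary for the
    current target position.  The combination rule (uniform averaging of the
    softmax outputs) is an assumption; the text only states that the two
    models are ensembled at decoding time.
    """
    p_base = torch.softmax(logits_baseline, dim=-1)
    p_cont = torch.softmax(logits_continued, dim=-1)
    return weight * p_base + (1.0 - weight) * p_cont

if __name__ == "__main__":
    torch.manual_seed(0)
    vocab = 8                                   # toy target vocabulary
    p = ensemble_step(torch.randn(vocab), torch.randn(vocab))
    print(p.sum().item())                       # sums to 1 (up to floating point)
    print(int(p.argmax()))                      # greedy word choice for this step
```

Obtaining the continue model itself needs no special machinery: training simply resumes on the in-domain parallel data with the baseline's parameters as initialization, which is why the whole adaptation fits in a few hours rather than weeks.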
{ "answers": [ { "annotation_id": [ "3d804c3b0bb79d93ec4c38932f00a50f22b8a389" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 1: German→English: Learning curve of the continue training. Scores are given in (TERBLEU)/2 (lower is better). tst2013 is our in-domain and newstest2014 is our out-of-domain test set. The baseline model is only trained on the large amount of out-of-domain data." ], "extractive_spans": [], "free_form_answer": "Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)", "highlighted_evidence": [ "FLOAT SELECTED: Figure 1: German→English: Learning curve of the continue training. Scores are given in (TERBLEU)/2 (lower is better). tst2013 is our in-domain and newstest2014 is our out-of-domain test set. The baseline model is only trained on the large amount of out-of-domain data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "five" ], "paper_read": [ "no" ], "question": [ "How many examples do they have in the target domain?" ], "question_id": [ "2a46db1b91de4b583d4a5302b2784c091f9478cc" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "domain adaptation" ], "topic_background": [ "familiar" ] }
{ "caption": [ "Table 1: German→English corpus statistics for in-domain (IWSLT 2015) and out-of-domain (WMT 2015) parallel training data.", "Table 2: Adaptation results for the German→English translation task: tst2013 is the in-domain test set and newstest2014 is the out-of-domain test set. The combined data is the concatenation of the in-domain and out-of-domain training data.", "Figure 1: German→English: Learning curve of the continue training. Scores are given in (TERBLEU)/2 (lower is better). tst2013 is our in-domain and newstest2014 is our out-of-domain test set. The baseline model is only trained on the large amount of out-of-domain data.", "Table 3: German→English: Human evaluation on an in-domain sample of 50 sentences. The annotator assigns each sentence a score between 0-5 (higher is better).", "Figure 2: German→English: Learning curve of the ensemble of 2 models: continue training (cf. Figure 1) and baseline model. tst2014 is the in-domain; newstest2014 the out-of-domain test set.", "Table 4: German→English training times per epoch. The setup including both in-domain and out-of-domain training data has 4.1M parallel sentences.", "Table 5: Chinese→English corpus statistics for in-domain and out-of-domain parallel training data.", "Table 6: Chinese→English adaptation results. The adaptation has been utilized on 593k indomain parallel sentences.", "Figure 3: Chinese→English: Learning curve of the continue training. Scores are given in (TERBLEU)/2 (lower is better). The baseline is only trained on the large amount of out-of-domain training data.", "Table 7: Human evaluation on a 50 sentence Chinese→English in-domain sample. The annotator assigns each sentence a score between 0-5 (higher is better).", "Figure 4: Chinese→English: Learning curve of the ensemble of 2 models: the continue training (cf. Figure 3) with the baseline model. The smaller training sets are random subsets of the complete in-domain training data.", "Table 8: Chinese→English training times per epoch. The setup including both in-domain and out-of-domain training data has 12.2M parallel sentences." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "6-Figure1-1.png", "6-Table3-1.png", "7-Figure2-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Figure3-1.png", "9-Table7-1.png", "10-Figure4-1.png", "10-Table8-1.png" ] }
[ "How many examples do they have in the target domain?" ]
[ [ "1612.06897-6-Figure1-1.png" ] ]
[ "Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)" ]
489
1912.06813
Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining
We introduce a novel sequence-to-sequence (seq2seq) voice conversion (VC) model based on the Transformer architecture with text-to-speech (TTS) pretraining. Seq2seq VC models are attractive owing to their ability to convert prosody. While seq2seq models based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been successfully applied to VC, the use of the Transformer network, which has shown promising results in various speech processing tasks, has not yet been investigated. Nonetheless, their data-hungry property and the mispronunciation of converted speech make seq2seq models far from practical. To this end, we propose a simple yet effective pretraining technique to transfer knowledge from learned TTS models, which benefit from large-scale, easily accessible TTS corpora. VC models initialized with such pretrained model parameters are able to generate effective hidden representations for high-fidelity, highly intelligible converted speech. Experimental results show that such a pretraining scheme can facilitate data-efficient training and outperform an RNN-based seq2seq VC model in terms of intelligibility, naturalness, and similarity.
{ "paragraphs": [ [ "Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion —synthesis paradigm BIBREF1. First, a high quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 is utilized to extract different acoustic features, such as spectral features and fundamental frequency (F0). These features are converted separately, and a waveform synthesizer finally generates the converted waveform using the converted features. Past VC studies have focused on the conversion of spectral features while only applying a simple linear transformation to F0. In addition, the conversion is usually performed frame-by-frame, i.e, the converted speech and the source speech are always of the same length. To summarize, the conversion of prosody, including F0 and duration, is overly simplified in the current VC literature.", "This is where sequence-to-sequence (seq2seq) models BIBREF4 can play a role. Modern seq2seq models, often equipped with an attention mechanism BIBREF5, BIBREF6 to implicitly learn the alignment between the source and output sequences, can generate outputs of various lengths. This ability makes the seq2seq model a natural choice to convert duration in VC. In addition, the F0 contour can also be converted by considering F0 explicitly (e.g, forming the input feature sequence by concatenating the spectral and F0 sequences) BIBREF7, BIBREF8, BIBREF9 or implicitly (e.g, using mel spectrograms as the input feature) BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Seq2seq VC can further be applied to accent conversion BIBREF13, where the conversion of prosody plays an important role.", "Existing seq2seq VC models are based on either recurrent neural networks (RNNs) BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 or convolutional neural networks (CNNs) BIBREF9. In recent years, the Transformer architecture BIBREF16 has been shown to perform efficiently BIBREF17 in various speech processing tasks such as automatic speech recognition (ASR) BIBREF18, speech translation (ST) BIBREF19, BIBREF20, and text-to-speech (TTS) BIBREF21. On the basis of attention mechanism solely, the Transformer enables parallel training by avoiding the use of recurrent layers, and provides a receptive field that spans the entire input by using multi-head self-attention rather than convolutional layers. Nonetheless, the above-mentioned speech applications that have successfully utilized the Transformer architecture all attempted to find a mapping between text and acoustic feature sequences. VC, in contrast, attempts to map between acoustic frames, whose high time resolution introduces challenges regarding computational memory cost and accurate attention learning.", "Despite the promising results, seq2seq VC models suffer from two major problems. First, seq2seq models usually require a large amount of training data, although a large-scale parallel corpus, i.e, pairs of speech samples with identical linguistic contents uttered by both source and target speakers, is impractical to collect. Second, as pointed out in BIBREF11, the converted speech often suffers from mispronunciations and other instability problems such as phonemes and skipped phonemes. Several techniques have been proposed to address these issues. 
In BIBREF10 a pretrained ASR module was used to extract phonetic posteriorgrams (PPGs) as an extra clue, whereas PPGs were solely used as the input in BIBREF13. The use of context preservation loss and guided attention loss BIBREF22 to stabilize training has also been proposed BIBREF8, BIBREF9. Multitask learning and data augmentation were incorporated in BIBREF11 using additional text labels to improve data efficiency, and linguistic and speaker representations were disentangled in BIBREF12 to enable nonparallel training, thus removing the need for a parallel corpus. In BIBREF15 a large hand-transcribed corpus was used to generate artificial training data from a TTS model for a many-to-one (normalization) VC model, where multitask learning was also used.", "One popular means of dealing with the problem of limited training data is transfer learning, where knowledge from massive, out-of-domain data is utilized to aid learning in the target domain. Recently, TTS systems, especially neural seq2seq models, have enjoyed great success owing to the vast, large-scale corpora contributed by the community. We argue that lying at the core of these TTS models is the ability to generate effective intermediate representations, which facilitates correct attention learning that bridges the encoder and the decoder. Transfer learning from TTS has been successfully applied to tasks such as speaker adaptation BIBREF23, BIBREF24, BIBREF25, BIBREF26. In BIBREF27 the first attempt to apply this technique to VC was made by bootstrapping a nonparallel VC system from a pretrained speaker-adaptive TTS model.", "In this work, we propose a novel yet simple pretraining technique to transfer knowledge from learned TTS models. To transfer the core ability, i.e., the generation and utilization of fine representations, knowledge from both the encoder and the decoder is needed. Thus, we pretrain them in separate steps: first, the decoder is pretrained by using a large-scale TTS corpus to train a conventional TTS model. The TTS training ensures a well-trained decoder that can generate high-quality speech with the correct hidden representations. As the encoder must be pretrained to encode input speech into hidden representations that can be recognized by the decoder, we train the encoder in an autoencoder style with the pretrained decoder fixed. This is carried out using a simple reconstruction loss. We demonstrate that the VC model initialized with the above pretrained model parameters can generate high-quality, highly intelligible speech even with very limited training data.", "Our contributions in this work are as follows:", "We apply the Transformer network to VC. To our knowledge, this is the first work to investigate this combination.", "We propose a TTS pretraining technique for VC. The pretraining process provides a prior for fast, sample-efficient VC model learning, thus reducing the data size requirement and training time. In this work, we verify the effectiveness of this scheme by transferring knowledge from Transformer-based TTS models to a Transformer-based VC model." ], [ "Seq2seq models are used to find a mapping between a source feature sequence $\vec{x}_{1:n}=(\vec{x}_1, \cdots , \vec{x}_n)$ and a target feature sequence $\vec{y}_{1:m}=(\vec{y}_1, \cdots , \vec{y}_m)$ which do not necessarily have to be of the same length, i.e., $n \ne m$. Most seq2seq models have an encoder-decoder structure BIBREF4, where advanced ones are equipped with an attention mechanism BIBREF5, BIBREF6.
First, an encoder ($\text{Enc}$) maps $\vec{x}_{1:n}$ into a sequence of hidden representations $\vec{h}_{1:n}=(\vec{h}_1, \cdots , \vec{h}_n)$. The decoding of the output sequence is autoregressive, which means that the previously generated symbols are considered an additional input at each decoding time step. To decode an output feature $\vec{y}_t$, a weighted sum of $\vec{h}_{1:n}$ first forms a context vector $\vec{c}_t$, where the weight vector is represented by a calculated attention probability vector $\vec{a}_t=(a^{(1)}_t, \cdots , a^{(n)}_t)$. Each attention probability $a^{(k)}_t$ can be thought of as the importance of the hidden representation $\vec{h}_k$ at the $t$th time step. Then the decoder ($\text{Dec}$) uses the context vector $\vec{c}_t$ and the previously generated features $\vec{y}_{1:t-1}=(\vec{y}_1, \cdots , \vec{y}_{t-1})$ to decode $\vec{y}_t$. Note that both the calculation of the attention vector and the decoding process take the previous hidden state of the decoder $\vec{q}_{t-1}$ as the input. The above-mentioned procedure can be formulated as follows: $\vec{h}_{1:n} = \text{Enc}(\vec{x}_{1:n})$,", "$\vec{a}_t = \text{attention}(\vec{q}_{t-1}, \vec{h}_{1:n})$,", "$\vec{c}_t = \sum _{k=1}^{n} a^{(k)}_t \vec{h}_k$,", "$\vec{y}_t, \vec{q}_t = \text{Dec}(\vec{y}_{1:t-1}, \vec{q}_{t-1}, \vec{c}_t)$. As pointed out in BIBREF27, BIBREF28, TTS and VC are similar since the output in both tasks is a sequence of acoustic features. In such seq2seq speech synthesis tasks, it is a common practice to employ a linear layer to further project the decoder output to the desired dimension. During training, the model is optimized via backpropagation using an L1 or L2 loss." ], [ "In this subsection we describe the Transformer-based TTS system proposed in BIBREF21, which we will refer to as Transformer-TTS. Transformer-TTS is a combination of the Transformer BIBREF16 architecture and the Tacotron 2 BIBREF29 TTS system.", "We first briefly introduce the Transformer model BIBREF16. The Transformer relies solely on a so-called multi-head self-attention module that learns sequential dependences by jointly attending to information from different representation subspaces. The main body of Transformer-TTS resembles the original Transformer architecture, which, as in any conventional seq2seq model, consists of an encoder stack and a decoder stack that are composed of $L$ encoder layers and $L$ decoder layers, respectively. An encoder layer contains a multi-head self-attention sublayer followed by a positionwise fully connected feedforward network. A decoder layer, in addition to the two sub-layers in the encoder layer, contains a third sub-layer, which performs multi-head attention over the output of the encoder stack. Each layer is equipped with residual connections and layer normalization. Finally, since no recurrent relation is employed, sinusoidal positional encoding BIBREF30 is added to the inputs of the encoder and decoder so that the model can be aware of information about the relative or absolute position of each element.", "The model architecture of Transformer-TTS is depicted in Figure FIGREF2. Since the Transformer architecture was originally designed for machine translation, several changes have been made to the architecture in BIBREF21 to make it compatible in the TTS task. First, as in Tacotron 2, prenets are added to the encoder and decoder sides. Since the text space and the acoustic feature space are different, the positional embeddings are employed with corresponding trainable weights to adapt to the scale of each space.
In addition to the linear projection to predict the output acoustic feature, an extra linear layer is added to predict the stop token BIBREF29. A weighted binary cross-entropy loss is used so that the model can learn when to stop decoding. As a common practice in recent TTS models, a five-layer CNN postnet predicts a residual to refine the final prediction.", "In this work, our implementation is based on the open-source ESPnet-TTS BIBREF31, BIBREF26, where the encoder prenet is discarded and the guided attention loss is applied BIBREF22 to partial heads in partial decoder layers BIBREF17." ], [ "In this section we describe the combination of Transformer and seq2seq VC. Our proposed model, called the Voice Transformer Network (VTN), is largely based on Transformer-TTS introduced in Section SECREF6. Our model consumes the source log-mel spectrogram and outputs the converted log-mel spectrogram. As pointed out in Section SECREF5, TTS and VC respectively encode text and acoustic features to decode acoustic features. Therefore, we make a very simple modification to the TTS model, which is to replace the embedding lookup layer in the encoder with a linear projection layer, as shown in Figure FIGREF2. Although more complicated networks can be employed, we found that this simple design is sufficient to generate satisfying results. The rest of the model architecture as well as the training process remains the same as that for Transformer-TTS.", "An important trick we found to be useful here is to use a reduction factor in both the encoder and the decoder for accurate attention learning. In seq2seq TTS, since the time resolution of acoustic features is usually much larger than that of the text input, a reduction factor $r_d$ is commonly used on the decoder side BIBREF32, where multiple stacked frames are decoded at each time step. On the other hand, although the input and output of VC are both acoustic features, the high time resolution (about 100 frames per second) not only makes attention learning difficult but also increases the training memory footprint. While pyramid RNNs were used to reduce the time resolution in BIBREF10, here we simply introduce an encoder reduction factor $r_e$, where adjacent frames are stacked to reduce the time axis. We found that this not only leads to better attention alignment but also reduces the training memory footprint by half and subsequently the number of required gradient accumulation steps BIBREF26." ], [ "We present a text-to-speech pretraining technique that enables fast, sample-efficient training without introducing additional modification or loss to the original model structure or training loss. Assume that, in addition to a small, parallel VC dataset $\\vec{D}_{\\text{VC}}=\\lbrace \\vec{S}_{\\text{src}}, \\vec{S}_{\\text{trg}}\\rbrace $, access to a large single-speaker TTS corpus $\\vec{D}_{\\text{TTS}}=\\lbrace \\vec{T}_{\\text{TTS}}, \\vec{S}_{\\text{TTS}}\\rbrace $ is also available. $\\vec{S}_{\\text{src}}, \\vec{S}_{\\text{trg}}$ denote the source, target speech respectively, and $\\vec{T}_{\\text{TTS}}, \\vec{S}_{\\text{TTS}}$ denote the text and speech of the TTS speaker respectively. Our setup is highly flexible in that we do not require any of the speakers to be the same, nor any of the sentences between the VC and TTS corpus to be parallel. 
We employ a two-stage training procedure, where in the first stage we use $\\vec{D}_{\\text{TTS}}$ to learn the initial parameters as a prior, and then use $\\vec{D}_{\\text{VC}}$ to adapt to the VC model in the second stage. As argued in Section SECREF1, the ability to generate fine-grained hidden representations $\\vec{H}$ is the key to a good VC model, so our goal is to find a set of prior model parameters to train the final encoder $\\text{Enc}^{\\text{S}}_{\\text{VC}}$ and decoder $\\text{Dec}^{\\text{S}}_{\\text{VC}}$. The overall procedure is depicted in Figure FIGREF7." ], [ "The decoder pretraining is as simple as training a conventional TTS model using $\\vec{D}_{\\text{TTS}}$. Since text itself contains pure linguistic information, the text encoder $\\text{Enc}^{\\text{T}}_{\\text{TTS}}$ here is ensured to learn to encode an effective hidden representation that can be consumed by the decoder $\\text{Dec}^{\\text{S}}_{\\text{TTS}}$. Furthermore, by leveraging the large-scale corpus, the decoder is expected to be more robust by capturing various speech features, such as articulation and prosody." ], [ "A well pretrained encoder should be capable of encoding acoustic features into hidden representations that are recognizable by the pretrained decoder. With this goal in mind, we train an autoencoder whose decoder is the one pretrained in Section SECREF9 and kept fixed during training. The desired pretrained encoder $\\text{Enc}^{\\text{S}}_{\\text{TTS}}$ can be obtained by minimizing the reconstruction loss of $\\vec{S}_{\\text{TTS}}$. As the decoder pretraining process described in Section SECREF9 takes a hidden representation encoded from text as the input, fixing it in the encoder pretraining process guarantees the encoder to behave similarly to the text encoder $\\text{Enc}^{\\text{T}}_{\\text{TTS}}$, which is to extract fine-grained, linguistic-information-rich representations." ], [ "Finally, using $\\vec{D}_{\\text{VC}}$, we train the desired VC models, with the encoder and decoder initialized with $\\text{Enc}^{\\text{S}}_{\\text{TTS}}$ and $\\text{Dec}^{\\text{S}}_{\\text{TTS}}$ pretrained in Section SECREF10 and Section $\\ref {ssec:dpt}$, respectively. The pretrained model parameters serve as a very good prior to adapt to the relatively scarce VC data, as we will show later. Also, compared with training from scratch, the model takes less than half the training time to converge with the pretraining scheme, enabling extremely efficient training." ], [ "We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long.", "The entire implementation was carried out on the open-source ESPnet toolkit BIBREF26, BIBREF31, including feature extraction, training and benchmarking. We extracted 80-dimensional mel spectrograms with 1024 FFT points and a 256 point frame shift. 
The base settings for the TTS model and training follow the Transformer.v1 configuration in BIBREF26, and we made minimal modifications to it for VC. The reduction factors $r_e, r_d$ are both 2 in all VC models. For the waveform synthesis module, we used Parallel WaveGAN (PWG) BIBREF35, which is a non-autoregressive variant of the WaveNet vocoder BIBREF36, BIBREF37 and enables parallel, faster than real-time waveform generation. Since speaker-dependent neural vocoders outperform speaker-independent ones BIBREF38, we trained a speaker-dependent PWG by conditioning on natural mel spectrograms using the full training data of slt. Our goal here is to demonstrate the effectiveness of our proposed method, so we did not train separate PWGs for different training sizes of the TTS/VC model used, although target speaker adaptation with limited data in VC can be used BIBREF39.", "We carried out two types of objective evaluations between the converted speech and the ground truth: the mel cepstrum distortion (MCD), a commonly used measure of spectral distortion in VC, and the character error rate (CER) as well as the word error rate (WER), which estimate the intelligibility of the converted speech. We used the WORLD vocoder BIBREF2 to extract 24-dimensional mel cepstrum coefficients with a 5 ms frame shift, and calculated the distortion of nonsilent, time-aligned frame pairs. The ASR engine is based on the Transformer architecture BIBREF18 and is trained using the LibriSpeech dataset BIBREF40. The CER and WER for the ground-truth evaluation set of slt were 0.9% and 3.8%, respectively. We also reported the ASR results of the TTS model adapted on different sizes of slt training data in Table TABREF8, which can be regarded as upper bounds." ], [ "To evaluate the importance and the effectiveness of each pretraining scheme we proposed, we conducted a systematic comparison between different training processes and different sizes of training data. The objective results are in Table TABREF8. First, when the network was trained from scratch without any pretraining, the performance was not satisfactory even with the full training set. With decoder pretraining, a performance boost in MCD was obtained, whereas the ASR results were similar. Nonetheless, as we reduced the training size, the performance dropped dramatically, a similar trend to that reported in BIBREF12. Finally, by incorporating encoder pretraining, the model exhibited a significant improvement in all objective measures, where the effectiveness was robust against the reduction in the size of training data. Note that in the clb-slt conversion pair, our proposed method showed the potential to achieve extremely impressive ASR results comparable to the TTS upper bound." ], [ "Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features.", "The objective evaluation results of the baseline are reported in Table TABREF8. For the different sizes of training data, our system not only consistently outperformed the baseline method but also remained robust, whereas the performance of the baseline method dropped dramatically as the size of training data was reduced. 
This proves that our proposed method can improve data efficiency as well as pronunciation. We also observed that when trained from scratch, our VTN model had a similar MCD and inferior ASR performance compared with the baseline. As the ATTS2S employed an extra mechanism to stabilize training, this result may indicate the superiority of using the Transformer architecture over RNNs. We leave rigorous investigation for future work.", "Systemwise subjective tests on naturalness and conversion similarity were also conducted to evaluate the perceptual performance. For naturalness, participants were asked to evaluate the naturalness of the speech by the mean opinion score (MOS) test on a five-point scale. For conversion similarity, each listener was presented a natural speech of the target speaker and a converted speech, and asked to judge whether they were produced by the same speaker with the confidence of the decision, i.e., sure or not sure. Ten non-native English speakers were recruited.", "Table TABREF14 shows the subjective results on the evaluation set. First, with the full training set, our proposed VTN model significantly outperformed the baseline ATTS2S by over one point for naturalness and 30% for similarity. Moreover, when trained with 80 utterances, our proposed method showed only a slight drop in performance, and was still superior to the baseline method. This result justifies the effectiveness of our method and also showed that the pretraining technique can greatly increase data efficiency without severe performance degradation.", "Finally, one interesting finding is that the VTN trained with the full training set also outperformed the adapted TTS model, while the VTN with limited data exhibited comparable performance. Considering that the TTS models in fact obtained good ASR results, we suspect that the VC-generated speech could benefit from encoding the prosody information from the source speech. In contrast, the lack of prosodic clues in the linguistic input in TTS reduced the naturalness of the generated speech." ], [ "In this work, we successfully applied the Transformer structure to seq2seq VC. Also, to address the problems of data efficiency and mispronunciation in seq2seq VC, we proposed the transfer of knowledge from easily accessible, large-scale TTS corpora by initializing the VC models with pretrained TTS models. A two-stage training strategy that pretrains the decoder and the encoder subsequently ensures that fine-grained intermediate representations are generated and fully utilized. Objective and subjective evaluations showed that our pretraining scheme can greatly improve speech intelligibility, and it significantly outperformed an RNN-based seq2seq VC baseline. Even with limited training data, our system can be successfully trained without significant performance degradation. In the future, we plan to more systematically examine the effectiveness of the Transformer architecture compared with RNN-based models. Extension of our pretraining methods to more flexible training conditions, such as nonparallel training BIBREF12, BIBREF27, is also an important future task." ], [ "This work was supported in part by JST PRESTO Grant Number JPMJPR1657 and JST CREST Grant Number JPMJCR19A3, Japan." 
] ], "section_name": [ "Introduction", "Background ::: Sequence-to-sequence speech systhesis", "Background ::: Transformer-based text-to-speech synthesis", "Voice Transformer Network", "Proposed training strategy with text-to-speech pretraining", "Proposed training strategy with text-to-speech pretraining ::: Decoder pretraining", "Proposed training strategy with text-to-speech pretraining ::: Encoder pretraining", "Proposed training strategy with text-to-speech pretraining ::: VC model training", "Experimental evaluation ::: Experimental settings", "Experimental evaluation ::: Effectiveness of TTS pretraining", "Experimental evaluation ::: Comparison with baseline method", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "c1dd83c15f13f8cd920270da88880515d1be3d34" ], "answer": [ { "evidence": [ "We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long." ], "extractive_spans": [ "the CMU ARCTIC database BIBREF33", " the M-AILABS speech dataset BIBREF34 " ], "free_form_answer": "", "highlighted_evidence": [ "We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "3fc4a17783ea6c5246eac9fecfaef7894fea0ba8" ], "answer": [ { "evidence": [ "Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features." ], "extractive_spans": [], "free_form_answer": "a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model", "highlighted_evidence": [ "Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "no", "no" ], "question": [ "What datasets are experimented with?", "What is the baseline model?" ], "question_id": [ "6ee27ab55b1f64783a9e72e3f83b7c9ec5cc8073", "bb4de896c0fa4bf3c8c43137255a4895f52abeef" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Model architecture of Transformer-TTS and VC.", "Figure 2: Illustration of proposed TTS pretraining technique for VC.", "Table 1: Validation-set objective evaluation results of adapted TTS, baseline (ATTS2S), and variants of the VTN trained on different sizes of data.", "Table 2: Evaluation-set subjective evaluation results with 95% confidence intervals for the ground truth, the TTS model, the baseline ATTS2S, and the proposed model, VTN. The numbers in the parentheses indicate the numbers of training utterances." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "5-Table2-1.png" ] }
[ "What is the baseline model?" ]
[ [ "1912.06813-Experimental evaluation ::: Comparison with baseline method-0" ] ]
[ "a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model" ]
493
1903.00172
Open Information Extraction from Question-Answer Pairs
Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with. NeurON addresses several challenges. First, an answer text is often hard to understand without knowing the question, and second, relevant information can span multiple sentences. To address these, NeurON formulates extraction as a multi-source sequence-to-sequence learning task, wherein it combines distributed representations of a question and an answer to generate knowledge facts. We describe experiments on two real-world datasets that demonstrate that NeurON can find a significant number of new and interesting facts to extend a knowledge base compared to state-of-the-art OpenIE methods.
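The abstract above describes NeurON as a multi-source sequence-to-sequence model that combines distributed representations of the question and the answer before decoding a tuple. Since the architectural details are not given here, the following PyTorch sketch is only a generic illustration of one common way to combine two encoders in a multi-source seq2seq model; it is not the NeurON architecture, and every layer, size, and name in it is a hypothetical choice.

```python
import torch
import torch.nn as nn

class TwoSourceSeq2Seq(nn.Module):
    """Toy multi-source seq2seq: separate encoders for question and answer,
    whose combined final states initialize a shared decoder."""
    def __init__(self, vocab=200, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.q_enc = nn.GRU(emb, hid, batch_first=True)   # question encoder
        self.a_enc = nn.GRU(emb, hid, batch_first=True)   # answer encoder
        self.bridge = nn.Linear(2 * hid, hid)              # combine the two encoder states
        self.dec = nn.GRU(emb, hid, batch_first=True)      # tuple decoder
        self.out = nn.Linear(hid, vocab)

    def forward(self, q_ids, a_ids, y_ids):
        _, hq = self.q_enc(self.embed(q_ids))   # final question state
        _, ha = self.a_enc(self.embed(a_ids))   # final answer state
        h0 = torch.tanh(self.bridge(torch.cat([hq, ha], dim=-1)))
        dec_out, _ = self.dec(self.embed(y_ids), h0)
        return self.out(dec_out)                # logits over output tuple tokens

# Smoke test on random token ids.
model = TwoSourceSeq2Seq()
q = torch.randint(0, 200, (4, 12))   # question tokens
a = torch.randint(0, 200, (4, 30))   # answer tokens
y = torch.randint(0, 200, (4, 8))    # shifted target tuple tokens
print(model(q, a, y).shape)          # torch.Size([4, 8, 200])
```

Concatenating the two final encoder states and projecting them down is the simplest combination strategy; attending over both source sequences during decoding is another common option.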
{ "paragraphs": [ [ "This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition." ], [ "The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper." ], [ "Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection \"The First Page\" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section \"Length of Submission\" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.", "By uncommenting \\aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \\def\\aclpaperid{***} definition at the top.", "The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \\aclfinalcopy is commented out.", "The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline." ], [ "The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. 
The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. ( users may uncomment the \\aclfinalcopy command in the document preamble.)", "Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ )." ], [ "NAACL-HLT provides this description in 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings." ], [ "For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from using the pdflatex command. If your version of produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.", "Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.", "It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \\special{papersize=210mm,297mm} in the latex preamble (directly below the \\usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some.", "Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible." ], [ "Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:", "Left and right margins: 2.5 cm", "Top margin: 2.5 cm", "Bottom margin: 2.5 cm", "Column width: 7.7 cm", "Column height: 24.7 cm", "Gap between columns: 0.6 cm", "Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible." ], [ "For reasons of uniformity, Adobe's Times Roman font should be used. 
In 2e this is accomplished by putting", "\\usepackage{times}", "\\usepackage{latexsym}", "in the preamble. If Times Roman is unavailable, use Computer Modern Roman (2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font." ], [ "Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.", "Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.", "The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent.", "Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.", "Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.", "Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title." ], [ "Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.", "Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \\cite and the latter with \\shortcite or \\newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \\cite command, e.g., \\cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. 
Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents.", "We suggest that instead of", "“ BIBREF0 showed that ...”", "you use", "“Gusfield Gusfield:97 showed that ...”", "If you are using the provided and Bib style files, you can use the command \\citet (cite in text) to get “author (year)” citations.", "If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option:", "\\usepackage[nohyperref]{naaclhlt2019}", "Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/.", "As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography.", "As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g.,", "“We previously showed BIBREF0 ...”", "should be avoided. Instead, use citations such as", "“ BIBREF0 Gusfield:97 previously showed ... ”", "Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form.", "Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review.", "References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \\bibliography commands near the end for more.", "Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. 
A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 .", "The and Bib style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above.", "Example citing an arxiv paper: BIBREF7 .", "Example article in journal citation: BIBREF8 .", "Example article in proceedings, with location: BIBREF9 .", "Example article in proceedings, without location: BIBREF10 .", "See corresponding .bib file for further details.", "Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.", "Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix." ], [ "Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line." ], [ "Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink.", "Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments." ], [ "In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color." ], [ "It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”." ], [ "The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. 
Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review.", "NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix \"Appendices\" and Appendix \"Supplemental Material\" for further information.", "Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source." ], [ "The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review.", "Preparing References:", "Include your own bib file like this: \\bibliographystyle{acl_natbib} \\begin{thebibliography}{40} ", "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proc. ACL '15/IJCNLP '15, pages 344–354.", "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR '15.", "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proc. IJCAI '07, pages 2670–2676.", "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proc. EMNLP '13, pages 1533–1544.", "Nikita Bhutani, HV Jagadish, and Dragomir Radev. 2016. Nested propositions in open information extraction. In Proc. EMNLP '16, pages 55–64.", "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proc. NIPS '13, pages 2787–2795.", "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. EMNLP '14, pages 1724–1734.", "Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proc. ACL '18, pages 407–413.", "Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922.", "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proc. EMNLP '11, pages 1535–1545.", "Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. 
Learning knowledge graphs for question answering through conversational dialog. In Proc. NAACL-HLT '15, pages 851–861.", "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.", "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proc. ACL '17, pages 963–973.", "Prachi Jain, Shikhar Murty, Mausam, and Soumen Chakrabarti. 2018. Mitigating the effect of out-of-vocabulary entity pairs in matrix factorization for KB inference. In Proc. IJCAI '18, pages 4122–4129.", "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL '17 (System Demonstrations), pages 67–72.", "Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. In Proc. ICLR '16.", "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attention-based neural machine translation. In Proc. EMNLP '15, pages 1412–1421.", "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using $t$ -SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.", "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, et al. 2012. Open language learning for information extraction. In Proc. EMNLP '12, pages 523–534.", "Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proc. WWW '16, pages 625–635.", "Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proc. AAAI '16, pages 1955–1961.", "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP '14, pages 1532–1543.", "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proc. EMNLP '17, pages 338–348.", "Subhashree S and P Sreenivasa Kumar. 2018. Enriching domain ontologies using question-answer datasets. In Proc. CoDS-COMAD '18, pages 329–332.", "Swarnadeep Saha, Harinder Pal, et al. 2017. Bootstrapping for numerical open ie. In Proc. ACL '17, pages 317–323.", "Denis Savenkov, Wei-Lwun Lu, Jeff Dalton, and Eugene Agichtein. 2015. Relation extraction from community generated question-answer pairs. In Proc. NAACL-HLT '15, pages 96–102.", "Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proc. EMNLP '16.", "Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proc. ACL '18, pages 885–895.", "Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proc. EACL '17, pages 1063–1073.", "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. NIPS '15, pages 2692–2700.", "Mengting Wan and Julian McAuley. 2016. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In Proc. ICDM '16, pages 489–498.", "Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743.", "Zeqiu Wu, Xiang Ren, Frank F. 
Xu, Ji Li, and Jiawei Han. 2018. Indirect supervision for relation extraction using question-answer pairs. In Proc. WSDM '18, pages 646–654.", "Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proc. ACL '16, pages 1341–1350.", "Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In Proc. ICLR '17.", "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proc. NAACL-HLT '07 (Demonstrations), pages 25–26.", "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proc. ACL '17, pages 440–450.", "Biao Zhang, Deyi Xiong, and Jinsong Su. 2016. Cseq2seq: Cyclic sequence-to-sequence learning. arXiv preprint arXiv:1607.08725.", "Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. A constrained sequence-to-sequence neural model for sentence simplification. arXiv preprint arXiv:1704.02312.", "Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proc. NAACL-HLT '16, pages 30–34.", "|", "where naaclhlt2019 corresponds to a naaclhlt2019.bib file. Appendices Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \\appendix before any appendix section to switch the section numbering over to letters. Supplemental Material Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material. " ] ], "section_name": [ "Credits", "Introduction", "General Instructions", "The Ruler", "Electronically-available resources", "Format of Electronic Manuscript", "Layout", "Fonts", "The First Page", "Sections", "Footnotes", "Graphics", "Accessibility", "Translation of non-English Terms", "Length of Submission", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "7604914dc35858e65a74ebeee5787460f37cad9b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "evaluate" ], "unanswerable": true, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] }, { "annotation_id": [ "fef6cde23f81e9af4e7b8d8a40dfab85b7c1ea9f" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA.", "FLOAT SELECTED: Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.", "FLOAT SELECTED: Table 1: Various types of training instances." ], "extractive_spans": [], "free_form_answer": "AmazonQA and ConciergeQA datasets", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA.", "FLOAT SELECTED: Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.", "FLOAT SELECTED: Table 1: Various types of training instances." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] }, { "annotation_id": [ "40999743aae7fd603174b4459e3f94ece341ea43" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 1: Multi-Encoder, Constrained-Decoder model for tuple extraction from (q, a)." ], "extractive_spans": [], "free_form_answer": "Multi-Encoder, Constrained-Decoder model", "highlighted_evidence": [ "FLOAT SELECTED: Figure 1: Multi-Encoder, Constrained-Decoder model for tuple extraction from (q, a)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] }, { "annotation_id": [ "d278ea5f1958089253468398fdc0ce24de1d0555" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Various types of training instances." ], "extractive_spans": [], "free_form_answer": "ConciergeQA and AmazonQA", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Various types of training instances." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] }, { "annotation_id": [ "b2d5b5262b1bb8f56f049254d65a26b1ddb90af7" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "How did they evaluate the system?", "Where did they get training data?", "What extraction model did they use?", "Which datasets did they experiment on?", "What types of facts can be extracted from QA pairs that can't be extracted from general text?" ], "question_id": [ "0b24b5a652d674d4694668d889643bc1accf18ef", "1fb73176394ef59adfaa8fc7827395525f9a5af7", "3a3a65c65cebc2b8c267c334e154517d208adc7d", "d70ba6053e245ee4179c26a5dabcad37561c6af0", "802687121a98ba4d7df1f8040ea0dc1cc9565b69" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "information extraction", "information extraction", "information extraction", "information extraction", "information extraction" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Multi-Encoder, Constrained-Decoder model for tuple extraction from (q, a).", "Figure 2: State diagram for tag masking rules. V is the vocabulary including placeholder tags, T is the set of placeholder tags.", "Table 1: Various types of training instances.", "Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA.", "Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset.", "Figure 3: Example embedding vectors from question and answer encoders. Underlines denote similar embedding vectors in both the encoders.", "Table 6: Different errors N and B made by NEURON (N ) and NEURALOPENIE (B) respectively.", "Figure 4: Human-in-the-loop system for extending a domain-specific KB.", "Figure 5: Screenshot of the instructions and examples of the crowdsourced task." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Figure3-1.png", "8-Table6-1.png", "8-Figure4-1.png", "13-Figure5-1.png" ] }
[ "Where did they get training data?", "What extraction model did they use?", "Which datasets did they experiment on?" ]
[ [ "1903.00172-6-Table3-1.png", "1903.00172-7-Table4-1.png", "1903.00172-5-Table1-1.png" ], [ "1903.00172-3-Figure1-1.png" ], [ "1903.00172-5-Table1-1.png" ] ]
[ "AmazonQA and ConciergeQA datasets", "Multi-Encoder, Constrained-Decoder model", "ConciergeQA and AmazonQA" ]
495
1908.02402
Flexibly-Structured Model for Task-Oriented Dialogues
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical, yet very effective, sequence-to-sequence approach, in which the language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. The policy engine and language generation tasks are then modeled jointly, following the state tracking stage. The copy-augmented sequential decoder handles new or unknown values in the conversation, while the multi-label decoder, combined with the sequential decoder, ensures the explicit assignment of values to slots. On the generation side, slot binary classifiers are used to improve performance. The architecture scales to real-world scenarios and is shown through an empirical evaluation to achieve state-of-the-art performance on both the Cambridge Restaurant dataset and the Stanford in-car assistant dataset\footnote{The code is available at \url{https://github.com/uber-research/FSDM}}.
{ "paragraphs": [ [ "A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined semantic frame. State tracking is a critical component that models explicitly the input semantic frame and the dialogue history for producing KB queries. The semantic frame and the corresponding belief state are defined in terms of informable slots values and requestable slots. Informable slot values capture information provided by the user so far, e.g., {price=cheap, food=italian} indicating the user wants a cheap Italian restaurant at this stage. Requestable slots capture the information requested by the user, e.g., {address, phone} means the user wants to know the address and phone number of a restaurant. Dialogue policy model decides on the system action which is then realized by a language generation component.", "To mitigate the problems with such a classic modularized dialogue system, such as the error propagation between modules, the cascade effect that the updates of the modules have and the expensiveness of annotation, end-to-end training of dialogue systems was recently proposed BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . These systems train one whole model to read the current user's utterance, the past state (that may contain all previous interactions) and generate the current state and response.", "There are two main approaches for modeling the belief state in end-to-end task-oriented dialogue systems in the literature: the fully structured approach based on classification BIBREF7 , BIBREF9 , and the free-form approach based on text generation BIBREF10 . The fully structured approaches BIBREF11 , BIBREF12 use the full structure of the KB, both its schema and the values available in it, and assumes that the sets of informable slot values and requestable slots are fixed. In real-world scenarios, this assumption is too restrictive as the content of the KB may change and users' utterances may contain information outside the pre-defined sets. An ideal end-to-end architecture for state tracking should be able to identify the values of the informable slots and the requestable slots, easily adapt to new domains, to the changes in the content of the KB, and to the occurrence of words in users' utterances that are not present in the KB at training time, while at the same time providing the right amount of inductive bias to allow generalization. Recently, a free-form approach called TSCP (Two Stage Copy Net) BIBREF10 was proposed. This approach does not integrate any information about the KB in the model architecture. It has the advantage of being readily adaptable to new domains and changes in the content of the KB as well as solving the out-of-vocabulary word problem by generating or copying the relevant piece of text from the user's utterances in its response generation. However, TSCP can produce invalid states (see Section \"Experiments\" ). Furthermore, by putting all slots together into a sequence, it introduces an unwanted (artificial) order between different slots since they are encoded and decoded sequentially. It could be even worse if two slots have overlapping values, like departure and arrival airport in a travel booking system. 
As such, the unnecessary order of the slots makes getting rid of the invalid states a great challenge for the sequential decoder. As a summary, both approaches to state tracking have their weaknesses when applied to real-world applications.", "This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section \"Methodology\" for details).", "The main contributions of this work are" ], [ "Our work is related to end-to-end task-oriented dialogue systems in general BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF14 , BIBREF7 , BIBREF8 and those that extend the Seq2Seq BIBREF15 architecture in particular BIBREF13 , BIBREF16 , BIBREF17 . Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, BIBREF13 , BIBREF18 , BIBREF17 adopt a copy mechanism that allows copying information retrieved from the KB to the generated response. BIBREF16 adopt Memory Networks BIBREF19 to memorize the retrieved KB entities and words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.", "Our work is also akin to modularly connected end-to-end trainable networks BIBREF7 , BIBREF9 , BIBREF0 , BIBREF4 , BIBREF3 , BIBREF20 . BIBREF7 includes belief state tracking and has two phases in training: the first phase uses belief state supervision, and then the second phase uses response generation supervision. BIBREF9 improves BIBREF7 by adding a policy network using latent representations so that the dialogue system can be continuously improved through reinforcement learning. These methods utilize classification as a way to decode the belief state.", " BIBREF10 decode the belief state as well as the response in a free-form fashion, but it tracks the informable slot values without an explicit assignment to an informable slot. Moreover, the arbitrary order in which informable slot values and requestable slots are encoded and decoded suggests that the sequential inductive bias the architecture provides may not be the right one.", "Other works BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 focus on the scalability of DST to large or changing vocabularies. BIBREF26 score a dynamically defined set of candidates as informable slot values. 
BIBREF27 addresses the problem of large vocabularies with a mix of rules and machine-learned classifiers." ], [ "We propose a fully-fledged task-oriented dialogue system called Flexibly-Structured Dialogue Model (FSDM), which operates at the turn level. Its overall architecture is shown in Figure 1 , which illustrates one dialogue turn. Without loss of generality, let us assume that we are on the $t$ -th turn of a dialogue. FSDM has three (3) inputs: agent response and belief state of the $t-1$ -th turn, and user utterance of the $t$ -th turn. It has two (2) outputs: the belief state for the $t$ -th turn that is used to query the KB, and the agent response of the $t$ -th turn based on the query result. As we can see, belief tracking is the key component that turns unstructured user utterance and the dialogue history into a KB-friendly belief state. The success of retrieving the correct KB result and further generating the correct response to complete a task relies on the quality of the produced belief state.", "FSDM contains five (5) components that work together in an end-to-end manner as follows: (1) The input is encoded and the last hidden state of the encoder serves as the initial hidden state of the belief state tracker and the response decoder; (2) Then, the belief state tracker generates a belief state $B_t = \\lbrace I_t, R_t\\rbrace $ , where $I_{t}$ is the set of constraints used for the KB query generated by the informable slots value decoder and $R_{t}$ is the user requested slots identified by the requestable slots multi-label classifier; (3) Given $I_t$ , the KB query component queries the KB and encodes the number of records returned in a one-hot vector $d_t$ ; (4) The response slot binary classifier predicts which slots should appear in the agent response $S_t$ ; (5) Finally, the agent response decoder takes in the KB output $d_t$ , a word copy probability vector $\\mathcal {P}^{c}$ computed from $I_t$ , $R_t$ , $I_{t}$0 together with an attention on hidden states of the input encoder and the belief decoders, and generates a response $I_{t}$1 ." ], [ "The input contains three parts: (1) the agent response $A_{t-1}$ , (2) the belief state $B_{t-1}$ from the $(t-1)$ -th turn and (3) the current user utterance $U_t$ . These parts are all text-based and concatenated, and then consumed by the input encoder. Specifically, the belief state $B_{t-1}$ is represented as a sequence of informable slot names with their respective values and requestable slot names. As an example, the sequence $\\langle $ cheap, end_price, italian, end_food, address, phone, end_belief $\\rangle $ indicates a state where the user informed cheap and Italian as KB query constraints and requested the address and phone number.", "The input encoder consists of an embedding layer followed by a recurrent layer with Gated Recurrent Units (GRU) BIBREF28 . It maps the input $A_{t-1} \\circ B_{t-1} \\circ U_{t}$ (where $\\circ $ denotes concatenation) to a sequence of hidden vectors $\\lbrace h^{E}_i| i = 1, \\dots , |A_{t-1} \\circ B_{t-1} \\circ U_{t}| \\rbrace $ so that $h^{E}_i = \\text{GRU}_H(e^{A_{t-1} \\circ B_{t-1} \\circ U_{t}})$ where $e$ is the embedding function that maps from words to vectors. The output of the input encoder is its last hidden state $h^{E}_{l}$ , which is served as the initial state for the belief state and response decoders as discussed next." ], [ "The belief state is composed of informable slot values $I_{t}$ and the requestable slots $R_{t}$ . 
We describe the generation of the former in this subsection and the latter in the next subsection.", "The informable slot values track information provided by the user and are used to query the KB. We allow each informable slot to have its own decoder to resolve the unwanted artificial dependencies among slot values introduced by TSCP BIBREF10 . As an example of artificial dependency, `italian; expensive' appears a lot in the training data. During testing, even when the gold informable value is `italian; moderate', the decoder may still generate `italian; expensive'. Modeling one decoder for each slot exactly associates the values with the corresponding informable slot.", "The informable slot value decoder consists of GRU recurrent layers with a copy mechanism as shown in the yellow section of Figure 1 . It is composed of weight-tied GRU generators that take the same initial hidden state $h^{E}_{l}$ , but have different start-of-sentence symbols for each unique informable slot. This way, each informable slot value decoder is dependent on the encoder's output, but it is also independent of the values generated for the other slots. Let $\\lbrace k^{I}\\rbrace $ denote the set of informable slots. The probability of the $j$ th word $P(y^{k^I}_j)$ being generated for the slot $k^I$ is calculated as follows: (1) calculate the attention with respect to the input encoded vectors to obtain the context vector $c^{k^I}_j$ , (2) calculate the generation score $\\phi _g(y^{k^I}_j)$ and the copy score $\\phi _c(y^{k^I}_j)$ based on the current step's hidden state $h^{k^I}_j$ , (3) calculate the probability using the copy mechanism: ", "$$\\small \\begin{split}\n&c^{k^I}_j = \\text{Attn}(h^{k^I}_{j-1}, \\lbrace h_{i}^E\\rbrace ),\\\\\n&h^{k^I}_j = \\text{GRU}_I\\Big ((c^{k^I}_j \\circ e^{y^{k^I}_{j}}), h^{k^I}_{j-1}\\Big ),\\\\\n&\\phi _g(y^{k^I}_j) = W_{g}^{K^I}\\cdot h^{k^I}_j,\\\\\n&\\phi _c(y^{k^I}_j) = \\text{tanh}(W_c^{K^I} \\cdot h^{y_j^{k^I}}) \\cdot h_j^{k^I} ,\\\\\n& y_j^{k^I} \\in A_{t-1} \\cup B_{t-1} \\cup U_t,\\\\\n&P(y^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}) = \\text{Copy} \\Big ( \\phi _c(y^{k^I}_j), \\phi _g(y^{k^I}_j)\\Big ),\n\\end{split}$$ (Eq. 9) ", "where for each informable slot $k^I$ , $y_0^{k^I} = k^I$ and $h_0^{k^I} = h^{E}_{l}$ , $e^{y^{k^I}_{j}}$ is the embedding of the current input word (the one generated at the previous step), and $W_{g}^{K^I}$ and $W_{c}^{K^I}$ are learned weight matrices. We follow BIBREF29 and BIBREF30 for the copy $\\text{Copy}(\\cdot , \\cdot )$ and attention $\\text{Attn}(\\cdot , \\cdot )$ mechanisms implementation respectively.", "The loss for the informable slot values decoder is calculated as follows: ", "$$\\small \\begin{split}\n\\mathcal {L}^I =& - \\frac{1}{|\\lbrace k^I\\rbrace |} \\frac{1}{|Y^{k^I}|} \\sum _{k^I} \\sum _j \\\\\n&\\log P(y^{k^I}_j = z^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}),\n\\end{split}$$ (Eq. 10) ", "where $Y^{K^I}$ is the sequence of informable slot value decoder predictions and $z$ is the ground truth label." ], [ "As the other part of a belief state, requestable slots are the attributes of KB entries that are explicitly requested by the user. We introduce a separate multi-label requestable slots classifier to perform binary classification for each slot. This greatly resolves the issues of TSCP that uses a single decoder with each step having unconstrained vocabulary-size choices, which may potentially lead to generating non-slot words. 
Similar to the informable slots decoders, such a separate classifier also eliminates the undesired dependencies among slots.", "Let $\\lbrace k^R\\rbrace $ denote the set of requestable slots. A single GRU cell is used to perform the classification. The initial state $h^{E}_{l}$ is used to pay attention to the input encoder hidden vectors to compute a context vector $c^{k^R}$ . The concatenation of $c^{k^R}$ and $e^{k^R}$ , the embedding vector of one requestable slot $k^R$ , is passed as input and $h^{E}_{l}$ as the initial state to the GRU. Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{R}$ and the output of the GRU $h^{k^R}$ to obtain $y^{k^R}$ , which is the probability of the slot being requested by the user. ", "$$\\small \\begin{split}\n&c^{k^R} = \\text{Attn}(h^{E}_{l}, \\lbrace h_{i}^E\\rbrace ),\\\\\n&h^{k^R} = \\text{GRU}_R\\Big ( (c^{k^R}\\circ e^{k^R}), h^{E}_{l} \\Big ),\\\\\n&y^{k^R} = \\sigma (W_{y}^{R} \\cdot h^{k^R}).\n\\end{split}$$ (Eq. 12) ", "The loss function for all requestable slot binary classifiers is: ", "$$\\small \\begin{split}\n\\mathcal {L}^R =& - \\frac{1}{|\\lbrace k^R\\rbrace |} \\sum _{k^R} \\\\\n&z^{k^R} \\log (y^{k^R}) + (1-z^{k^R}) \\log (1-y^{k^R}).\n\\end{split}$$ (Eq. 13) " ], [ "The generated informable slot values $I_t = \\lbrace Y^{k^I}\\rbrace $ are used as constraints of the KB query. The KB is composed of one or more relational tables and each entity is a record in one table. The query is performed to select a subset of the entities that satisfy those constraints. For instance, if the informable slots are {price=cheap, area=north}, all the restaurants that have attributes of those fields equal to those values will be returned. The output of this component, the one-hot vector $d_t$ , indicates the number of records satisfying the constraints. $d_t$ is a five-dimensional one-hot vector, where the first four dimensions represent integers from 0 to 3 and the last dimension represents 4 or more matched records. It is later used to inform the response slot binary classifier and the agent response decoder." ], [ "In order to incorporate all the relevant information about the retrieved entities into the response, FSDM introduces a new response slot binary classifier. Its inputs are requestable slots and KB queried result $d_t$ and the outputs are the response slots to appear in the agent response. Response slots are the slot names that are expected to appear in a de-lexicalized response (discussed in the next subsection). For instance, assume the requestable slot in the belief state is “address” and the KB query returned one candidate record. The response slot binary classifier may predict name_slot, address_slot and area_slot, which are expected to appear in an agent response as “name_slot is located in address_slot in the area_slot part of town”.", "The response slots $\\lbrace k^S\\rbrace $ map one-to-one to the requestable slots $\\lbrace k^R\\rbrace $ . The initial state of each response slot decoder is the last hidden state of the corresponding requestable slot decoder. In this case, the context vector $c^{k^S}$ is obtained by paying attention to all hidden vectors in the informable slot value decoders and requestable slots classifiers. Then, the concatenation of the context vector $c^{k^S}$ , the embedding vector of the response slot $e^{k^S}$ and the KB query vector $d_t$ are used as input to a single GRU cell. 
Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{S}$ and the output of the GRU $h^{k^S}$ to obtain a probability $y^{k^S}$ for each slot that is going to appear in the answer. ", "$$\\small \\begin{split}\n&c^{k^S} = \\text{Attn}(h^{k^R}, \\\\\n&\\lbrace h_{i}^{k^I}|k^I \\in K^I, i \\le |Y^{k^I}|\\rbrace \\cup \\lbrace h^{k^R}| k^R \\in K^R\\rbrace ), \\\\\n&h^{k^S} = \\text{GRU}_S\\Big ((c^{k^S} \\circ e^{k^S} \\circ d_t), h^{k^R}\\Big ),\\\\\n&y^{k^S} = \\sigma (W_{y}^{S} \\cdot h^{k^S}).\n\\end{split}$$ (Eq. 17) ", "The loss function for all response slot binary classifiers is: ", "$$\\small \\begin{split}\n\\mathcal {L}^S =& - \\frac{1}{|\\lbrace k^S\\rbrace |} \\sum _{k^S} \\\\\n&z^{k^S} \\log (y^{k^S}) + (1-z^{k^S}) \\log (1-y^{k^S}).\n\\end{split}$$ (Eq. 18) " ], [ "Lastly, we introduce the agent response decoder. It takes in the generated informable slot values, requestable slots, response slots, and KB query result and generates a (de-lexicalized) response. We adopt a copy-augmented decoder BIBREF29 as architecture. The canonical copy mechanism only takes a sequence of word indexes as inputs but does not accept the multiple Bernoulli distributions we obtain from sigmoid functions. For this reason, we introduce a vector of independent word copy probabilities $\\mathcal {P}^{C}$ , which is constructed as follows: ", "$$\\small \\mathcal {P^C}(w) = {\\left\\lbrace \\begin{array}{ll}\ny^{k^R}, & \\text{if } w = k^R,\\\\\ny^{k^S}, & \\text{if } w = k^S,\\\\\n1, & \\text{if } w \\in I_t,\\\\\n0, & \\text{otherwise},\n\\end{array}\\right.}$$ (Eq. 20) ", "where if a word $w$ is a requestable slot or a response slot, the probability is equal to their binary classifier output; if a word appears in the generated informable slot values, its probability is equal to 1; for the other words in the vocabulary the probability is equal to 0. This vector is used in conjunction with the agent response decoder prediction probability to generate the response.", "The agent response decoder is responsible for generating a de-lexicalized agent response. The response slots are substituted with the values of the results obtained by querying the KB before the response is returned to the user.", "Like the informable slot value decoder, the agent response decoder also uses a copy mechanism, so it has a copy probability and generation probability. Consider the generation of the $j$ th word. Its generation score $\\phi _g$ is calculated as: ", "$$\\small \\begin{split}\n&c^{A^E}_j = \\text{Attn}(h_{j-1}^A, \\lbrace h_i^E\\rbrace ), \\\\\n&c^{A^B}_j = \\text{Attn}(h_{j-1}^A, \\lbrace h_{i}^{k^I}|k^I \\in K^I, i \\le |Y^{k^I}|\\rbrace \\\\\n&\\cup \\lbrace h^{k^R}| k^R \\in K^R\\rbrace )\\cup \\lbrace h^{k^S}| k^S \\in K^S\\rbrace ),\\\\\n&h^{A}_j = \\text{GRU}_A\\Big ( (c^{A^E}_j \\circ c^{A^B}_j \\circ e^{A}_j \\circ d_t), h_{j-1}^A \\Big ),\\\\\n&\\phi _g(y^A_j) = W_{g}^{A} \\cdot h^{A}_j,\n\\end{split}$$ (Eq. 21) ", "where $c^{A^E}_j$ is a context vector obtained by attending to the hidden vectors of the input encoder, $c^{A^B}_j$ is a context vector obtained by attending to all hidden vectors of the informable slot value decoder, requestable slot classifier and response slot classifier, and $W_{g}^{A}$ is a learned weight matrix. The concatenation of the two context vectors $c^{A^E}_j$ and $c^{A^B}_j$ , the embedding vector $e^{A}_j$ of the previously generated word and the KB query output vector $d_t$ is used as input to a GRU. 
Note that the initial hidden state is $h_0^A = h^{E}_{l}$ . The copy score $\\phi _c$ is calculated as: ", "$$\\small \\phi _c(y_j^A) = {\\left\\lbrace \\begin{array}{ll}\n\\mathcal {P}^C(y_j^A) \\cdot \\text{tanh}(W_c^A \\cdot h^{y_j^A}) \\cdot h_j^A, &\\\\\n\\text{if } y_j^A \\in I_t \\cup K^R \\cup K^S,&\\\\\n\\mathcal {P}^C(y_j^A), \\text{otherwise},&\n\\end{array}\\right.}$$ (Eq. 22) ", "where $W_c^A$ is a learned weight matrix. The final probability is: ", "$$\\small P(y^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}) = \\text{Copy}(\\phi _g(y^A_j), \\phi _c(y_j^A)).$$ (Eq. 23) ", "Let $z$ denote the ground truth de-lexicalized agent response. The loss for the agent response decoder is calculated as follows where $Y^A$ is the sequence of agent response decoder prediction: ", "$$\\small \\mathcal {L}^A = - \\frac{1}{|Y^{A}|} \\sum _j \\log P(y^{A}_j = z^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}).$$ (Eq. 24) " ], [ "The loss function of the whole network is the sum of the four losses described so far for the informable slot values $\\mathcal {L}^I$ , requestable slot $\\mathcal {L}^R$ , response slot $\\mathcal {L}^S$ and agent response decoders $\\mathcal {L}^A$ , weighted by $\\alpha $ hyperparameters: ", "$$\\small \\mathcal {L} = \\alpha ^{I}\\mathcal {L}^I + \\alpha ^{R}\\mathcal {L}^R + \\alpha ^{S}\\mathcal {L}^S +\n\\alpha ^{A}\\mathcal {L}^A.$$ (Eq. 26) ", "The loss is optimized in an end-to-end fashion, with all modules trained simultaneously with loss gradients back-propagated to their weights. In order to do so, ground truth results from database queries are also provided to the model to compute the $d_t$ , while at prediction time results obtained by using the generated informable slot values $I_t$ are used." ], [ "We tested the FSDM on the Cambridge Restaurant dataset (CamRest) BIBREF7 and the Stanford in-car assistant dataset (KVRET) BIBREF13 described in Table 1 ." ], [ "We use NLTK BIBREF31 to tokenize each sentence. The user utterances are precisely the original texts, while all agent responses are de-lexicalized as described in BIBREF10 . We obtain the labels for the response slot decoder from the de-lexicalized response texts. We use 300-dimensional GloVe embeddings BIBREF32 trained on 840B words. Tokens not present in GloVe are initialized to be the average of all other embeddings plus a small amount of random noise to make them different from each other. We optimize both training and model hyperparameters by running Bayesian optimization over the product of validation set BLEU, EMR, and SuccF $_1$ using skopt. The model that performed the best on the validation set uses Adam optimizer BIBREF33 with a learning rate of 0.00025 for minimizing the loss in Equation 26 for both datasets. We apply dropout with a rate of 0.5 after the embedding layer, the GRU layer and any linear layer for CamRest and 0.2 for KVRET. The dimension of all hidden states is 128 for CamRest and 256 for KVRET. Loss weights $\\alpha ^I$ , $\\alpha ^R$ , $\\alpha ^S$ , $\\alpha ^A$ are 1.5, 9, 8, 0.5 respectively for CamRest and 1, 3, 2, 0.5 for KVRET." ], [ "We evaluate the performance concerning belief state tracking, response language quality, and task completion. For belief state tracking, we report precision, recall, and F $_1$ score of informable slot values and requestable slots. BLEU BIBREF34 is applied to the generated agent responses for evaluating language quality. 
Although it is a poor choice for evaluating dialogue systems BIBREF35 , we still report it in order to compare with previous work that has adopted it. For task completion evaluation, the Entity Match Rate (EMR) BIBREF7 and Success F $_1$ score (SuccF $_1$ ) BIBREF10 are reported. EMR evaluates whether a system can correctly retrieve the user's indicated entity (record) from the KB based on the generated constraints so it can have only a score of 0 or 1 for each dialogue. The SuccF $_1$ score evaluates how a system responds to the user's requests at dialogue level: it is F $_1$ score of the response slots in the agent responses." ], [ "We compare FSDM with four baseline methods and two ablations.", "NDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.", "LIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. This allows the system to be refined by reinforcement learning.", "KVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.", "TSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation. TSCP includes further parameter tuning with reinforcement learning to increase the appearance of response slots in the generated response. We were unable to replicate the reported results using the provided code, hyperparameters, and random seed, so we report both the results from the paper and the average of 5 runs on the code with different random seeds (marked with $^\\dagger $ ).", "FSDM is the proposed method and we report two ablations: in FSDM/St the whole state tracking is removed (informable, requestable and response slots) and the answer is generated from the encoding of the input, while in FSDM/Res, only the response slot decoder is removed." ], [ "At the turn level, FSDM and FSDM/Res perform better than TSCP and TSCP/RL on belief state tracking, especially on requestable slots, as shown in Table 2 . FSDM and FSDM/Res use independent binary classifiers for the requestable slots and are capable of predicting the correct slots in all those cases. FSDM/Res and TSCP/RL do not have any additional mechanism for generating response slot, so FSDM/Res performing better than TSCP/RL shows the effectiveness of flexible-structured belief state tracker. Moreover, FSDM performs better than FSDM/Res, but TSCP performs worse than TSCP/RL. This suggests that using RL to increase the appearance of response slots in the response decoder does not help belief state tracking, but our response slot decoder does.", "FSDM performs better than all benchmarks on the dialogue level measures too, as shown in Table 3 , with the exception of BLEU score on KVRET, where it is still competitive. Comparing TSCP/RL and FSDM/Res, the flexibly-structured belief state tracker achieves better task completion than the free-form belief state tracker. Furthermore, FSDM performing better than FSDM/Res shows the effectiveness of the response slot decoder for task completion. The most significant performance improvement is obtained on CamRest by FSDM, confirming that the additional inductive bias helps to generalize from smaller datasets. 
More importantly, the experiment confirms that, although making weaker assumptions that are reasonable for real-world applications, FSDM is capable of performing at least as well as models that make stronger limiting assumptions which make them unusable in real-world applications." ], [ "We investigated the errors that both TSCP and FSDM make and discovered that the sequential nature of the TSCP state tracker leads to the memorization of common patterns that FSDM is not subject to. As an example (Table 4 ), TSCP often generates “date; party” as requestable slots even if only “party” and “time” are requested like in “what time is my next activity and who will be attending?” or if “party”, “time” and “date” are requested like in “what is the date and time of my next meeting and who will be attending it?”. FSDM produces correct belief states in these examples.", "FSDM misses some requestable slots in some conditions. For example, consider the user's utterance: “I would like their address and what part of town they are located in”. The ground-truth requestable slots are `address' and `area'. FSDM only predicts `address' and misses `area', which suggests that the model did not recognize `what part of town' as being a phrasing for requesting `area'. Another example is when the agent proposes “the name_SLOT is moderately priced and in the area_SLOT part of town . would you like their location ?” and the user replies “i would like the location and the phone number, please”. FSDM predicts `phone' as a requestable slot, but misses `address', suggesting it doesn't recognize the connection between `location' and `address'. The missing requestable slot issue may propagate to the agent response decoder. These issues may arise due to the use of fixed pre-trained embeddings and the single encoder. Using separate encoders for user utterance, agent response and dialogue history or fine-tuning the embeddings may solve the issue." ], [ "We propose the flexibly-structured dialogue model, a novel end-to-end architecture for task-oriented dialogue. It uses the structure in the schema of the KB to make architectural choices that introduce inductive bias and address the limitations of fully structured and free-form methods. The experiment suggests that this architecture is competitive with state-of-the-art models, while at the same time providing a more practical solution for real-world applications." ], [ "We would like to thank Alexandros Papangelis, Janice Lam, Stefan Douglas Webb and SIGDIAL reviewers for their valuable comments." ] ], "section_name": [ "Introduction", "Related Work", "Methodology", "Input Encoder", "Informable Slot Value Decoder", "Requestable Slot Binary Classifier", "Knowledge Base Query", "Response Slot Binary Classifier", "Word Copy Probability and Agent Response Decoder", "Loss Function", "Experiments", "Preprocessing and Hyper-parameters", "Evaluation Metrics", "Benchmarks", "Result Analysis", "Error Analysis", "Conclusion", "Acknowledgments" ] }
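As an illustrative aside on the FSDM paper above: the following is a minimal sketch, not the authors' implementation, of how the word copy probability vector $\mathcal{P}^C$ used by the agent response decoder could be assembled. The argument names (`vocab`, `requestable_probs`, `response_slot_probs`, `informable_values`) are hypothetical stand-ins for the outputs of the requestable-slot classifier, the response-slot classifier, and the informable slot value decoders.

```python
# Illustrative sketch only -- not the authors' code. Builds the word copy
# probability vector P^C(w): requestable and response slot tokens get their
# classifier probabilities, generated informable slot values get 1, and all
# other vocabulary words get 0. All inputs below are hypothetical placeholders.

def word_copy_probability(vocab, requestable_probs, response_slot_probs, informable_values):
    """Return a dict mapping each word w in `vocab` to P^C(w).

    requestable_probs:   {requestable slot token: sigmoid output y^{k^R}}
    response_slot_probs: {response slot token: sigmoid output y^{k^S}}
    informable_values:   set of words generated by the informable slot value decoders (I_t)
    """
    p_copy = {}
    for w in vocab:
        if w in requestable_probs:            # w is a requestable slot token
            p_copy[w] = requestable_probs[w]
        elif w in response_slot_probs:        # w is a response slot token
            p_copy[w] = response_slot_probs[w]
        elif w in informable_values:          # w was produced as an informable slot value
            p_copy[w] = 1.0
        else:                                 # any other vocabulary word cannot be copied
            p_copy[w] = 0.0
    return p_copy

# Toy usage with made-up values:
vocab = ["address", "phone", "address_slot", "name_slot", "cheap", "italian", "the", "is"]
print(word_copy_probability(
    vocab,
    requestable_probs={"address": 0.92, "phone": 0.10},
    response_slot_probs={"address_slot": 0.88, "name_slot": 0.95},
    informable_values={"cheap", "italian"},
))
```

In the full model this vector is further combined with the agent response decoder's generation and copy scores; it is shown here in isolation only to make the slot-to-probability assignment rule explicit.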
{ "answers": [ { "annotation_id": [ "43b9fb24d884aaab3d7f1d0e58d0d3d2c38fbc2e" ], "answer": [ { "evidence": [ "This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section \"Methodology\" for details)." ], "extractive_spans": [], "free_form_answer": "by adding extra supervision to generate the slots that will be present in the response", "highlighted_evidence": [ "Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section \"Methodology\" for details)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "ca2a4695129d0180768a955fb5910d639f79aa34" ] }, { "annotation_id": [ "40af5530589e68c54188bf4d191f6bfa96ed6601" ], "answer": [ { "evidence": [ "We compare FSDM with four baseline methods and two ablations.", "NDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.", "LIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. This allows the system to be refined by reinforcement learning.", "KVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.", "TSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation. TSCP includes further parameter tuning with reinforcement learning to increase the appearance of response slots in the generated response. We were unable to replicate the reported results using the provided code, hyperparameters, and random seed, so we report both the results from the paper and the average of 5 runs on the code with different random seeds (marked with $^\\dagger $ )." ], "extractive_spans": [], "free_form_answer": "NDM, LIDM, KVRN, and TSCP/RL", "highlighted_evidence": [ "We compare FSDM with four baseline methods and two ablations.\n\nNDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.\n\nLIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. 
This allows the system to be refined by reinforcement learning.\n\nKVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.\n\nTSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "ca2a4695129d0180768a955fb5910d639f79aa34" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "How do slot binary classifiers improve performance?", "What baselines have been used in this work?" ], "question_id": [ "f1bd66bb354e3dabf5dc4a71e6f08b17d472ecc9", "25fd61bb20f71051fe2bd866d221f87367e81027" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "research", "research" ] }
{ "caption": [ "Figure 1: FSDM architecture illustrated by a dialogue turn from the Cambridge Restaurant dataset with the following components: an input encoder (green), a belief state tracker (yellow for the informable slot values, orange for the requestable slots), a KB query component (purple), a response slot classifier (red), a component that calculates word copy probability (grey) and a response decoder (blue). Attention connections are not drawn for brevity.", "Table 1: Dataset", "Table 2: Turn-level performance results. Inf: Informable, Req: Requestable, P: Precision, R: Recall. Results marked with † are computed using available code, and all the other ones are reported from the original papers. ∗ indicates the improvement is statistically significant with p = 0.05.", "Table 4: Example of generated belief state and response for calendar scheduling domain", "Table 3: Dialogue level performance results. SuccF1: Success F1 score, EMR: Entity Match Rate. Results marked with † are computed using available code, and all the other ones are reported from the original papers. ∗ indicates the improvement is statistically significant with p = 0.05." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table4-1.png", "7-Table3-1.png" ] }
[ "How do slot binary classifiers improve performance?", "What baselines have been used in this work?" ]
[ [ "1908.02402-Introduction-3" ], [ "1908.02402-Benchmarks-2", "1908.02402-Benchmarks-0", "1908.02402-Benchmarks-4", "1908.02402-Benchmarks-1", "1908.02402-Benchmarks-3" ] ]
[ "by adding extra supervision to generate the slots that will be present in the response", "NDM, LIDM, KVRN, and TSCP/RL" ]
496
1601.02543
Evaluating the Performance of a Speech Recognition based System
Speech based solutions have taken center stage with the growth of the services industry, where there is a need to cater to a very large number of people from all strata of society. While natural language speech interfaces are the talk of the research community, in practice menu based speech solutions thrive. Typically, in a menu based speech solution the user is required to respond by speaking from a closed set of words when prompted by the system. A sequence of human speech responses to the IVR prompts results in the completion of a transaction. A transaction is deemed successful if the speech solution correctly recognizes all the spoken utterances of the user whenever prompted by the system. The usual mechanism to evaluate the performance of a speech solution is to test the system extensively by putting it to actual use by people and then evaluating the performance by analyzing the logs for successful transactions. This kind of evaluation can lead to dissatisfied test users, especially if the performance of the system results in a poor transaction completion rate. To mitigate this, the Wizard of Oz approach is adopted during evaluation of a speech system. Overall, this kind of evaluation is an expensive proposition in terms of both time and cost. In this paper, we propose a method to evaluate the performance of a speech solution without actually putting it to people use. We first describe the methodology and then show experimentally that it can be used to identify the performance bottlenecks of the speech solution even before the system is actually used, thus saving evaluation time and expenses.
{ "paragraphs": [ [ "There are several commercial menu based ASR systems available around the world for a significant number of languages and interestingly speech solution based on these ASR are being used with good success in the Western part of the globe BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Typically, a menu based ASR system restricts user to speak from a pre-defined closed set of words for enabling a transaction. Before commercial deployment of a speech solution it is imperative to have a quantitative measure of the performance of the speech solution which is primarily based on the speech recognition accuracy of the speech engine used. Generally, the recognition performance of any speech recognition based solution is quantitatively evaluated by putting it to actual use by the people who are the intended users and then analyzing the logs to identify successful and unsuccessful transactions. This evaluation is then used to identifying any further improvement in the speech recognition based solution to better the overall transaction completion rates. This process of evaluation is both time consuming and expensive. For evaluation one needs to identify a set of users and also identify the set of actual usage situations and perform the test. It is also important that the set of users are able to use the system with ease meaning that even in the test conditions the performance of the system, should be good, while this can not usually be guaranteed this aspect of keeping the user experience good makes it necessary to employ a wizard of Oz (WoZ) approach. Typically this requires a human agent in the loop during actual speech transaction where the human agent corrects any mis-recognition by actually listening to the conversation between the human user and the machine without the user knowing that there is a human agent in the loop. The use of WoZ is another expense in the testing a speech solution. All this makes testing a speech solution an expensive and time consuming procedure.", "In this paper, we describe a method to evaluate the performance of a speech solution without actual people using the system as is usually done. We then show how this method was adopted to evaluate a speech recognition based solution as a case study. This is the main contribution of the paper. The rest of the paper is organized as follows. The method for evaluation without testing is described in Section SECREF2 . In Section SECREF3 we present a case study and conclude in Section SECREF4 ." ], [ "Fig. FIGREF1 shows the schematic of a typical menu based speech solution having 3 nodes. At each node there are a set of words that the user is expected to speak and the system is supposed to recognize. In this particular schematic, at the entry node the user can speak any of the INLINEFORM0 words, namely INLINEFORM1 or INLINEFORM2 or INLINEFORM3 or INLINEFORM4 ; INLINEFORM5 is usually called the perplexity of the node in the speech literature. The larger the INLINEFORM6 the more the perplexity and higher the confusion and hence lower the recognition accuracies. In most commercial speech solutions the perplexity is kept very low, typically a couple of words. Once the word at the entry node has been recognized (say word INLINEFORM7 has been recognized), the system moves on to the second node where the active list of words to be recognized could be one of INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , ... INLINEFORM11 if the perplexity at the INLINEFORM12 node is INLINEFORM13 . This is carried on to the third node. 
A transaction is termed successful if and only if the recognition at each of the three nodes is correct. For example, typically in a banking speech solution the entry node could expect someone to speak among /credit card/, /savings account/, /current account/, /loan product/, /demat/, and /mutual fund transfer/ which has a perplexity of 6. Once a person speaks, say, /savings account/ and is recognized correctly by the system, at the second node it could be /account balance/ or /cheque/ or /last 5 transactions/ (perplexity 3) and at the third node (say, on recognition of /cheque/) it could be /new cheque book request/, /cheque status/, and /stop cheque request/ (perplexity 3). Though we will not dwell on this, it is important to note that an error in recognition at the entry node is more expensive than a recognition error at a lower node.", "Based on the call flow, and the domain the system can have several nodes for completion of a transaction. Typical menu based speech solutions strive for a 3 - 5 level nodes to make it usable. In any speech based solution (see Fig. FIGREF3 ) first the spoken utterance is hypothesized into a sequence of phonemes using the acoustic models. Since the phoneme recognition accuracy is low, instead of choosing one phoneme it identifies l-best (typically INLINEFORM0 ) matching phonemes. This phone lattice is then matched with all the expected words (language model) at that node to find the best match. For a node with perplexity INLINEFORM1 the constructed phoneme lattice of the spoken utterance is compared with the phoneme sequence representation of all the INLINEFORM2 words (through the lexicon which is one of he key components of a speech recognition system). The hypothesized phone lattice is declared one of the INLINEFORM3 words depending on the closeness of the phoneme lattice to the phoneme representation of the INLINEFORM4 words.", "We hypothesize that we can identify the performance of a menu based speech system by identifying the possible confusion among all the words that are active at a given node. If active words at a given node are phonetically similar it becomes difficult for the speech recognition system to distinguish them which in turn leads to recognition errors. We used Levenshtein distance BIBREF4 , BIBREF5 a well known measure to analyze and identify the confusion among the active words at a given node. This analysis gives a list of all set of words that have a high degree of confusability among them; this understanding can be then used to (a) restructure the set of active words at that node and/or (b) train the words that can be confused by using a larger corpus of speech data. This allows the speech recognition engine to be equipped to be able to distinguish the confusing words better. Actual use of this analysis was carried out for a speech solution developed for Indian Railway Inquiry System to identify bottlenecks in the system before its actual launch." ], [ "A schematic of a speech based Railway Information system, developed for Hindi language is shown in Fig. FIGREF4 . The system enables user to get information on five different services, namely, (a) Arrival of a given train at a given station, (b) Departure of a given train at a given station, (c) Ticket availability on a given date in a given train between two stations, and class, (d) Fare in a given class in a given train between two stations, and (e) PNR status. At the first recognition node (node-1), there are one or more active words corresponding to each of these services. 
For example, for selecting the service Fare, the user can speak among /kiraya jankari/, /kiraya/, /fare/. Similarly, for selecting service Ticket availability, user can speak /upalabdhata jankari/ or /ticket availability/ or /upalabdhata/. Generally the perplexity at a node is greater than on equal to the number of words that need to be recognized at that node. In this manner each of the services could have multiple words or phrases that can mean the same thing and the speaker could utter any of these words to refer to that service. The sum of all the possible different ways in which a service can be called ( INLINEFORM0 ) summed over all the 5 services gives the perplexity ( INLINEFORM1 ) at that node, namely, DISPLAYFORM0 ", "The speech recognition engine matches the phoneme lattice of the spoken utterance with all the INLINEFORM0 words which are active. The active word (one among the INLINEFORM1 words) with highest likelihood score is the recognized word. In order to avoid low likelihood recognitions a threshold is set so that even the best likelihood wordis returned only if the likelihood score is greater than the predefined threshold. Completion of a service requires recognitions at several nodes with different perplexity at each node. Clearly depending on the type of service that the user is wanting to use; the user has to go through different number of recognition nodes. For example, to complete the Arrival service it is required to pass through 3 recognition nodes namely (a) selection of a service, (b) selection of a train name and (c) selection of the railway station. While the perplexity (the words that are active) at the service selection node is fixed the perplexity at the station selection node could depend on the selection of the train name at an earlier node. For example, if the selected train stops at 23 stations, then the perplexity at the station selection node will be INLINEFORM2 .", "For confusability analysis at each of the node, we have used the Levenshtein distance BIBREF5 or the edit distance as is well known in computer science literature. We found that the utterances /Sahi/ and /Galat/ have 100% recognition. These words Sahi is represented by the string of phonemes in the lexicon as S AA HH I and the word Galat is represented as the phoneme sequence G L AX tT in the lexicon. We identified the edit distance between these two words Sahi and Galat and used that distance measure as the threshold that is able to differentiate any two words (say INLINEFORM0 ). So if the distance between any two active words at a given recognition node is lower than the threshold INLINEFORM1 , then there is a greater chance that those two active words could get confused (one word could be recognized as the other which is within a distance of INLINEFORM2 ). There are ways in which this possible misrecognition words could be avoided. The easiest way is to make sure that these two words together are not active at a given recognition node.", "Table TABREF6 shows the list of active word at the node 1 when the speech application was initially designed and Table TABREF7 shows the edit distance between all the active words at the node service given in Fig. FIGREF4 . The distance between words Sahi and Galat was found to be INLINEFORM0 which was set at the threshold, namely INLINEFORM1 . This threshold value was used to identify confusing active words. 
Clearly, as seen in the table, the distance between the word pairs (fare, pnr) and (pnr, prasthan) is INLINEFORM2 and INLINEFORM3 respectively, which is very close to the threshold value of INLINEFORM4 . This results in a high possibility that /fare/ may get recognized as /pnr/ and vice-versa.", "One can derive from the analysis of the active words that fare and pnr cannot coexist as active words at the same node. The result of the analysis was to remove the active words fare and pnr at that node.", "When the speech system was actually tested by giving speech samples, 17 out of 20 instances of /pnr/ were recognized as fare and vice-versa. Similarly, 19 out of 20 instances of /pnr/ were misrecognized as prasthan and vice-versa. This confusion is expected, as can be seen from the edit distance analysis of the active words in Table TABREF7 . The modified active word list (with fare and pnr removed) increased the recognition accuracy at the service node (Fig. FIGREF4 ) by as much as 90%.", "A similar analysis was carried out at other recognition nodes and the active word list was suitably modified to avoid possible confusion between active word pairs. This analysis and modification of the list of active words at a node resulted in a significant improvement in the transaction completion rate. We will present more experimental results in the final paper." ], [ "In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We used edit distance as the metric to identify the possible confusion between the active words. We showed that this metric can be used effectively to enhance the performance of a speech solution without actually putting it to user testing. There is a significant saving in terms of being able to identify recognition bottlenecks in a menu based speech solution through this analysis because it does not require actual people testing the system. This methodology was adopted to restructure the set of active words at each node for better speech recognition in an actual menu based speech recognition system that caters to the masses." ] ], "section_name": [ "Introduction", "Evaluation without Testing", "Case Study", "Conclusion" ] }
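To make the confusability check described above concrete, the following is a minimal Python sketch of the analysis: Levenshtein distances are computed over the phoneme representations of the active words at a node, the Sahi/Galat distance serves as the threshold, and any active-word pair falling below it is flagged as potentially confusable. Only the phoneme strings for Sahi and Galat are given in the paper; the entries for fare, pnr and prasthan below are hypothetical placeholders, so this illustrates the method rather than reproducing the actual system's lexicon.

```python
from itertools import combinations

def levenshtein(a, b):
    """Edit distance between two phoneme sequences (lists of phone symbols)."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,            # deletion
                                     dp[j - 1] + 1,        # insertion
                                     prev + (pa != pb))    # substitution
    return dp[-1]

# Active words at a node mapped to phoneme sequences. Only Sahi and Galat
# come from the paper's lexicon; the other entries are invented placeholders.
active_words = {
    "sahi":     ["S", "AA", "HH", "I"],
    "galat":    ["G", "L", "AX", "tT"],
    "fare":     ["F", "EY", "R"],                         # hypothetical
    "pnr":      ["P", "IY", "EH", "N", "AA", "R"],        # hypothetical
    "prasthan": ["P", "R", "AX", "S", "T", "AA", "N"],    # hypothetical
}

# Threshold: distance between the two words known to be recognized reliably.
threshold = levenshtein(active_words["sahi"], active_words["galat"])

# Flag active-word pairs whose edit distance falls below the threshold.
for w1, w2 in combinations(active_words, 2):
    d = levenshtein(active_words[w1], active_words[w2])
    if d < threshold:
        print(f"possible confusion at this node: {w1} <-> {w2} (distance {d})")
```

Pairs flagged by such a sweep are candidates for either removal from the node's active word list or additional acoustic training, as described in the paper.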
{ "answers": [ { "annotation_id": [ "40e8b561276384d47c4e6a0d3861d502b1fe37f3" ], "answer": [ { "evidence": [ "In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We used edit distance as the metric to identifying the possible confusion between the active words. We showed that this metric can be used effectively to enhance the performance of a speech solution without actually putting it to people test. There is a significant saving in terms of being able to identify recognition bottlenecks in a menu based speech solution through this analysis because it does not require actual people testing the system. This methodology was adopted to restructuring the set of active words at each node for better speech recognition in an actual menu based speech recognition system that caters to masses.", "We hypothesize that we can identify the performance of a menu based speech system by identifying the possible confusion among all the words that are active at a given node. If active words at a given node are phonetically similar it becomes difficult for the speech recognition system to distinguish them which in turn leads to recognition errors. We used Levenshtein distance BIBREF4 , BIBREF5 a well known measure to analyze and identify the confusion among the active words at a given node. This analysis gives a list of all set of words that have a high degree of confusability among them; this understanding can be then used to (a) restructure the set of active words at that node and/or (b) train the words that can be confused by using a larger corpus of speech data. This allows the speech recognition engine to be equipped to be able to distinguish the confusing words better. Actual use of this analysis was carried out for a speech solution developed for Indian Railway Inquiry System to identify bottlenecks in the system before its actual launch." ], "extractive_spans": [], "free_form_answer": "Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System.", "highlighted_evidence": [ "In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We used edit distance as the metric to identifying the possible confusion between the active words. ", "There is a significant saving in terms of being able to identify recognition bottlenecks in a menu based speech solution through this analysis because it does not require actual people testing the system. ", "Actual use of this analysis was carried out for a speech solution developed for Indian Railway Inquiry System to identify bottlenecks in the system before its actual launch." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "" ], "paper_read": [ "" ], "question": [ "what bottlenecks were identified?" ], "question_id": [ "8d793bda51a53a4605c1c33e7fd20ba35581a518" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "" ], "topic_background": [ "" ] }
{ "caption": [ "Fig. 1. Schematic of a typical menu based ASR system (Wn is spoken word).", "Fig. 2. A typical speech recognition system. In a menu based system the language model is typically the set of words that need to be recognized at a given node.", "Fig. 3. Call flow of Indian Railway Inquiry System (Wn is spoken word)", "Table 1. List of Active Words at node 1", "Table 2. Distance Measurement for Active Words at Node 1 of the Railway Inquiry System" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "6-Table1-1.png", "7-Table2-1.png" ] }
[ "what bottlenecks were identified?" ]
[ [ "1601.02543-Evaluation without Testing-2", "1601.02543-Conclusion-0" ] ]
[ "Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System." ]
499
1811.09786
Recurrently Controlled Recurrent Networks
Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture.
{ "paragraphs": [ [ "Recurrent neural networks (RNNs) live at the heart of many sequence modeling problems. In particular, the incorporation of gated additive recurrent connections is extremely powerful, leading to the pervasive adoption of models such as Gated Recurrent Units (GRU) BIBREF0 or Long Short-Term Memory (LSTM) BIBREF1 across many NLP applications BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In these models, the key idea is that the gating functions control information flow and compositionality over time, deciding how much information to read/write across time steps. This not only serves as a protection against vanishing/exploding gradients but also enables greater relative ease in modeling long-range dependencies.", "There are two common ways to increase the representation capability of RNNs. Firstly, the number of hidden dimensions could be increased. Secondly, recurrent layers could be stacked on top of each other in a hierarchical fashion BIBREF6 , with each layer's input being the output of the previous, enabling hierarchical features to be captured. Notably, the wide adoption of stacked architectures across many applications BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 signify the need for designing complex and expressive encoders. Unfortunately, these strategies may face limitations. For example, the former might run a risk of overfitting and/or hitting a wall in performance. On the other hand, the latter might be faced with the inherent difficulties of going deep such as vanishing gradients or difficulty in feature propagation across deep RNN layers BIBREF11 .", "This paper proposes Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and a general purpose neural building block for sequence modeling. RCRNs are characterized by its usage of two key components - a recurrent controller cell and a listener cell. The controller cell controls the information flow and compositionality of the listener RNN. The key motivation behind RCRN is to provide expressive and powerful sequence encoding. However, unlike stacked architectures, all RNN layers operate jointly on the same hierarchical level, effectively avoiding the need to go deeper. Therefore, RCRNs provide a new alternate way of utilizing multiple RNN layers in conjunction by allowing one RNN to control another RNN. As such, our key aim in this work is to show that our proposed controller-listener architecture is a viable replacement for the widely adopted stacked recurrent architecture.", "To demonstrate the effectiveness of our proposed RCRN model, we conduct extensive experiments on a plethora of diverse NLP tasks where sequence encoders such as LSTMs/GRUs are highly essential. These tasks include sentiment analysis (SST, IMDb, Amazon Reviews), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Experimental results show that RCRN outperforms BiLSTMs and multi-layered/stacked BiLSTMs on all 26 datasets, suggesting that RCRNs are viable replacements for the widely adopted stacked recurrent architectures. Additionally, RCRN achieves close to state-of-the-art performance on several datasets." ], [ "RNN variants such as LSTMs and GRUs are ubiquitous and indispensible building blocks in many NLP applications such as question answering BIBREF12 , BIBREF9 , machine translation BIBREF2 , entailment classification BIBREF13 and sentiment analysis BIBREF14 , BIBREF15 . 
In recent years, many RNN variants have been proposed, ranging from multi-scale models BIBREF16 , BIBREF17 , BIBREF18 to tree-structured encoders BIBREF19 , BIBREF20 . Models that are targeted at improving the internals of the RNN cell have also been proposed BIBREF21 , BIBREF22 . Given the importance of sequence encoding in NLP, the design of effective RNN units for this purpose remains an active area of research.", "Stacking RNN layers is the most common way to improve representation power. This has been used in many highly performant models ranging from speech recognition BIBREF7 to machine reading BIBREF9 . The BCN model BIBREF5 similarly uses multiple BiLSTM layers within its architecture. Models that use shortcut/residual connections in conjunction with stacked RNN layers are also notable BIBREF11 , BIBREF14 , BIBREF10 , BIBREF23 .", "Notably, a recent emerging trend is to model sequences without recurrence. This is primarily motivated by the fact that recurrence is an inherent prohibitor of parallelism. To this end, many works have explored the possibility of using attention as a replacement for recurrence. In particular, self-attention BIBREF24 has been a popular choice. This has sparked many innovations, including general purpose encoders such as DiSAN BIBREF25 and Block Bi-DiSAN BIBREF26 . The key idea in these works is to use multi-headed self-attention and positional encodings to model temporal information.", "While attention-only models may come close in performance, some domains may still require complex and expressive recurrent encoders. Moreover, we note that in BIBREF25 , BIBREF26 , the scores on multiple benchmarks (e.g., SST, TREC, SNLI, MultiNLI) do not outperform (or even approach) the state-of-the-art, most of which are models that still heavily rely on bidirectional LSTMs BIBREF27 , BIBREF20 , BIBREF5 , BIBREF10 . While self-attentive RNN-less encoders have recently been popular, our work moves in an orthogonal and possibly complementary direction, advocating a stronger RNN unit for sequence encoding instead. Nevertheless, it is also good to note that our RCRN model outperforms DiSAN in all our experiments.", "Another line of work is also concerned with eliminating recurrence. SRUs (Simple Recurrent Units) BIBREF28 are recently proposed networks that remove the sequential dependencies in RNNs. SRUs can be considered a special case of Quasi-RNNs BIBREF29 , which perform incremental pooling using pre-learned convolutional gates. A recent work, Multi-range Reasoning Units (MRU) BIBREF30 , follows the same paradigm, trading convolutional gates for features learned via expressive multi-granular reasoning. BIBREF31 proposed sentence-state LSTMs (S-LSTM), which exchange incremental reading for a single global state.", "Our work proposes a new way of enhancing the representation capability of RNNs without going deep. For the first time, we propose a controller-listener architecture that uses one recurrent unit to control another recurrent unit. Our proposed RCRN consistently outperforms stacked BiLSTMs and achieves state-of-the-art results on several datasets. We outperform the above-mentioned competitors such as DiSAN, SRUs, stacked BiLSTMs and sentence-state LSTMs." ], [ "This section formally introduces the RCRN architecture. Our model is split into two main components - a controller cell and a listener cell. Figure FIGREF1 illustrates the model architecture." ], [ "The goal of the controller cell is to learn gating functions in order to influence the target cell.
In order to control the target cell, the controller cell constructs a forget gate and an output gate which are then used to influence the information flow of the listener cell. For each gate (output and forget), we use a separate RNN cell. As such, the controller cell comprises two cell states and an additional set of parameters. The equations of the controller cell are defined as follows: $i^1_t = \sigma(W^1_i x_t + U^1_i h^1_{t-1} + b^1_i)$ and $i^2_t = \sigma(W^2_i x_t + U^2_i h^2_{t-1} + b^2_i)$", "$f^1_t = \sigma(W^1_f x_t + U^1_f h^1_{t-1} + b^1_f)$ and $f^2_t = \sigma(W^2_f x_t + U^2_f h^2_{t-1} + b^2_f)$", "$o^1_t = \sigma(W^1_o x_t + U^1_o h^1_{t-1} + b^1_o)$ and $o^2_t = \sigma(W^2_o x_t + U^2_o h^2_{t-1} + b^2_o)$", "$c^1_t = f^1_t \odot c^1_{t-1} + i^1_t \odot \tanh(W^1_c x_t + U^1_c h^1_{t-1} + b^1_c)$", "$c^2_t = f^2_t \odot c^2_{t-1} + i^2_t \odot \tanh(W^2_c x_t + U^2_c h^2_{t-1} + b^2_c)$", "$h^1_t = o^1_t \odot \tanh(c^1_t)$ and $h^2_t = o^2_t \odot \tanh(c^2_t)$ where INLINEFORM0 is the input to the model at time step INLINEFORM1 . INLINEFORM2 are the parameters of the model where INLINEFORM3 and INLINEFORM4 . INLINEFORM5 is the sigmoid function and INLINEFORM6 is the tanh nonlinearity. INLINEFORM7 is the Hadamard product. The controller RNN has two cell states denoted as INLINEFORM8 and INLINEFORM9 respectively. INLINEFORM10 are the outputs of the unidirectional controller cell at time step INLINEFORM11 . Next, we consider a bidirectional adaptation of the controller cell. Let Equations ( SECREF2 - SECREF2 ) be represented by the function INLINEFORM12 ; the bidirectional adaptation is represented as: $\overrightarrow{h^1_t}, \overrightarrow{h^2_t} = \overrightarrow{\mathrm{CT}}(\overrightarrow{h^1_{t-1}}, \overrightarrow{h^2_{t-1}}, x_t)$ for $t = 1, \ldots, M$", "$\overleftarrow{h^1_t}, \overleftarrow{h^2_t} = \overleftarrow{\mathrm{CT}}(\overleftarrow{h^1_{t+1}}, \overleftarrow{h^2_{t+1}}, x_t)$ for $t = M, \ldots, 1$", "$h^1_t = [\overrightarrow{h^1_t}; \overleftarrow{h^1_t}]$ and $h^2_t = [\overrightarrow{h^2_t}; \overleftarrow{h^2_t}]$ The outputs of the bidirectional controller cell are INLINEFORM0 for time step INLINEFORM1 . These hidden outputs act as gates for the listener cell." ], [ "The listener cell is another recurrent cell. The final output of the RCRN is generated by the listener cell, which is influenced by the controller cell. First, the listener cell uses a base recurrent model to process the sequence input. The equations of this base recurrent model are defined as follows: $i^3_t = \sigma(W^3_i x_t + U^3_i h^3_{t-1} + b^3_i)$", "$f^3_t = \sigma(W^3_f x_t + U^3_f h^3_{t-1} + b^3_f)$", "$o^3_t = \sigma(W^3_o x_t + U^3_o h^3_{t-1} + b^3_o)$", "$c^3_t = f^3_t \odot c^3_{t-1} + i^3_t \odot \tanh(W^3_c x_t + U^3_c h^3_{t-1} + b^3_c)$", "$h^3_t = o^3_t \odot \tanh(c^3_t)$ Similarly, a bidirectional adaptation is used, obtaining INLINEFORM0 . Next, using INLINEFORM1 (outputs of the controller cell), we define another recurrent operation as follows: $c^4_t = \sigma(h^1_t) \odot c^4_{t-1} + (1 - \sigma(h^1_t)) \odot h^3_t$", "$h^4_t = h^2_t \odot c^4_t$ where INLINEFORM0 and INLINEFORM1 are the cell and hidden states at time step INLINEFORM2 . INLINEFORM3 are the parameters of the listener cell where INLINEFORM4 . Note that INLINEFORM5 and INLINEFORM6 are the outputs of the controller cell. In this formulation, INLINEFORM7 acts as the forget gate for the listener cell. Likewise INLINEFORM8 acts as the output gate for the listener." ], [ "Intuitively, the overall architecture of the RCRN model can be explained as follows: Firstly, the controller cell can be thought of as two BiRNN models whose hidden states are used as the forget and output gates for another recurrent model, i.e., the listener. The listener uses a single BiRNN model for sequence encoding and then allows this representation to be altered by listening to the controller. An alternative interpretation of our model architecture is that it is essentially a `recurrent-over-recurrent' model. Clearly, the formulation we have used above uses BiLSTMs as the atomic building block for RCRN.
Hence, we note that it is also possible to have a simplified variant of RCRN that uses GRUs as the atomic block, which we found to perform slightly better on certain datasets.", "For efficiency purposes, we use the cuDNN optimized version of the base recurrent unit (LSTMs/GRUs). Additionally, note that the final recurrent cell (Equation ( SECREF3 )) can be subject to CUDA-level optimization following simple recurrent units (SRU) BIBREF28 . The key idea is that this operation can be performed along the dimension axis, enabling greater parallelization on the GPU. For the sake of brevity, we refer interested readers to BIBREF28 . Note that this form of CUDA-level optimization was also performed in the Quasi-RNN model BIBREF29 , which effectively subsumes the SRU model.", "Note that a single RCRN model is equivalent to a stacked BiLSTM of 3 layers. This is clear when we consider how two controller BiRNNs are used to control a single listener BiRNN. As such, for our experiments, when considering only the encoder and keeping all other components constant, 3L-BiLSTM has equal parameters to RCRN while RCRN and 3L-BiLSTM are approximately three times larger than BiLSTM." ], [ "This section discusses the overall empirical evaluation of our proposed RCRN model." ], [ "In order to verify the effectiveness of our proposed RCRN architecture, we conduct extensive experiments across several tasks in the NLP domain.", "Sentiment analysis is a text classification problem in which the goal is to determine the polarity of a given sentence/document. We conduct experiments at both the sentence and document level. More concretely, we use 16 Amazon review datasets from BIBREF32 , the well-established Stanford Sentiment TreeBank (SST-5/SST-2) BIBREF33 and the IMDb Sentiment dataset BIBREF34 . All tasks are binary classification tasks with the exception of SST-5. The metric is the accuracy score.", "The goal of this task is to classify questions into fine-grained categories such as number or location. We use the TREC question classification dataset BIBREF35 . The metric is the accuracy score.", "This is a well-established and popular task in the field of natural language understanding and inference. Given two sentences INLINEFORM0 and INLINEFORM1 , the goal is to determine if INLINEFORM2 entails or contradicts INLINEFORM3 . We use two popular benchmark datasets, i.e., the Stanford Natural Language Inference (SNLI) corpus BIBREF36 and the SciTail (Science Entailment) dataset BIBREF37 . This is a pairwise classification problem in which the metric is also the accuracy score.", "This is a standard problem in information retrieval and learning-to-rank. Given a question, the task at hand is to rank candidate answers. We use the popular WikiQA BIBREF38 and TrecQA BIBREF39 datasets. For TrecQA, we use the cleaned setting as denoted by BIBREF40 . The evaluation metrics are the MAP (Mean Average Precision) and Mean Reciprocal Rank (MRR) ranking metrics.", "This task involves reading documents and answering questions about these documents. We use the recent NarrativeQA BIBREF41 dataset, which involves reasoning and answering questions over story summaries. We follow the original paper and report scores on BLEU-1, BLEU-4, Meteor and Rouge-L." ], [ "In this section, we describe the task-specific model architectures for each task.", "This architecture is used for all text classification tasks (sentiment analysis and question classification datasets).
We use 300D GloVe BIBREF42 vectors with 600D CoVe BIBREF5 vectors as pretrained embedding vectors. An optional character-level word representation is also added (constructed with a standard BiGRU model). The output of the embedding layer is passed into the RCRN model directly without using any projection layer. Word embeddings are not updated during training. Given the hidden output states of the INLINEFORM0 dimensional RCRN cell, we take the concatenation of the max, mean and min pooling of all hidden states to form the final feature vector. This feature vector is passed into a single dense layer with ReLU activations of INLINEFORM1 dimensions. The output of this layer is then passed into a softmax layer for classification. This model optimizes the cross entropy loss. We train this model using Adam BIBREF43 and the learning rate is tuned amongst INLINEFORM2 .", "This architecture is used for entailment tasks. This is a pairwise classification model with two input sequences. Similar to the singleton classification model, we utilize the identical input encoder (GloVe, CoVe and character RNN) but include an additional part-of-speech (POS tag) embedding. We pass the input representation into a two layer highway network BIBREF44 of 300 hidden dimensions before passing it into the RCRN encoder. The feature representation of INLINEFORM0 and INLINEFORM1 is the concatenation of the max and mean pooling of the RCRN hidden outputs. To compare INLINEFORM2 and INLINEFORM3 , we pass INLINEFORM4 into a two layer highway network. This output is then passed into a softmax layer for classification. We train this model using Adam and the learning rate is tuned amongst INLINEFORM5 . We mainly focus on the encoder-only setting which does not allow cross sentence attention. This is a commonly tested setting on the SNLI dataset.", "This architecture is used for the ranking tasks (i.e., answer selection). We use the model architecture from Attentive Pooling BiLSTMs (AP-BiLSTM) BIBREF45 as our base and swap the RNN encoder with our RCRN encoder. The dimensionality is set to 200. The similarity scoring function is the cosine similarity and the objective function is the pairwise hinge loss with a margin of INLINEFORM0 . We use negative sampling of INLINEFORM1 to train our model. We train our model using Adadelta BIBREF46 with a learning rate of INLINEFORM2 .", "We use R-NET BIBREF9 as the base model. Since R-NET uses three Bidirectional GRU layers as the encoder, we replaced this stacked BiGRU layer with RCRN. For fairness, we use the GRU variant of RCRN instead. The dimensionality of the encoder is set to 75. We train both models using Adam with a learning rate of INLINEFORM0 .", "For all datasets, we include additional ablative baselines, swapping the RCRN with (1) a standard BiLSTM model and (2) a stacked BiLSTM of 3 layers (3L-BiLSTM). This is to fairly observe the impact of different encoder models based on the same overall model framework." ], [ "This section discusses the overall results of our experiments.", "On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model - sentence state LSTMs (SLSTM) BIBREF31 . The macro average performance gain over BiLSTMs ( INLINEFORM0 ) and Stacked (2 X BiLSTM) ( INLINEFORM1 ) is also notable.
On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets.", "Results on SST-5 (Table TABREF22 ) and SST-2 (Table TABREF22 ) are also promising. More concretely, our RCRN architecture achieves state-of-the-art results on SST-5 and SST-2. RCRN also outperforms many strong baselines such as DiSAN BIBREF25 , a self-attentive model, and the Bi-Attentive classification network (BCN) BIBREF5 that also uses CoVe vectors. On SST-2, strong baselines such as Neural Semantic Encoders BIBREF53 and similarly the BCN model are also outperformed by our RCRN model.", "Finally, on the IMDb sentiment classification dataset (Table TABREF25 ), RCRN achieved INLINEFORM0 accuracy. Our proposed RCRN outperforms Residual BiLSTMs BIBREF14 , 4-layered Quasi Recurrent Neural Networks (QRNN) BIBREF29 and the BCN model, which can be considered very competitive baselines. RCRN also outperforms the ablative baselines BiLSTM ( INLINEFORM1 ) and 3L-BiLSTM ( INLINEFORM2 ).", "Our results on the TREC question classification dataset (Table TABREF25 ) are also promising. RCRN achieved a state-of-the-art score of INLINEFORM0 on this dataset. A notable baseline is the Densely Connected BiLSTM BIBREF23 , a deep residual stacked BiLSTM model which RCRN outperforms ( INLINEFORM1 ). Our model also outperforms BCN (+0.4%) and SRU ( INLINEFORM2 ). Our ablative BiLSTM baselines achieve a reasonably high score, possibly due to CoVe embeddings. However, our RCRN can further increase the performance score.", "Results on entailment classification are also encouraging. On SNLI (Table TABREF26 ), RCRN achieves INLINEFORM0 accuracy, which is competitive with Gumbel LSTM. However, RCRN outperforms a wide range of baselines, including self-attention based models such as multi-head attention BIBREF24 and DiSAN BIBREF25 . There is also a performance gain of INLINEFORM1 over Bi-SRU even though our model does not use attention at all. RCRN also outperforms shortcut stacked encoders, which use a series of BiLSTMs connected by shortcut layers. Post review, as per reviewer request, we experimented with adding cross sentence attention, in particular adding the attention of BIBREF61 on 3L-BiLSTM and RCRN. We found that they performed comparably (both at INLINEFORM2 ). We did not have the resources to experiment further, even though intuitively incorporating different/newer variants of attention BIBREF65 , BIBREF63 , BIBREF13 and/or ELMo BIBREF50 could raise the score further. However, we hypothesize that cross sentence attention forces less reliance on the encoder. Therefore stacked BiLSTMs and RCRNs perform similarly.", "The results on SciTail similarly show that RCRN is more effective than BiLSTM ( INLINEFORM0 ). Moreover, RCRN outperforms several baselines in BIBREF37 , including models that use cross sentence attention such as DecompAtt BIBREF61 and ESIM BIBREF13 . However, it still falls short of recent state-of-the-art models such as OpenAI's Generative Pretrained Transformer BIBREF64 .", "Results on the answer selection task (Table TABREF26 ) show that RCRN leads to considerable improvements on both the WikiQA and TrecQA datasets. We investigate two settings. In the first, we reimplement AP-BiLSTM and swap the BiLSTM for RCRN encoders. In the second, we completely remove all attention layers from both models to test the ability of the standalone encoder. Without attention, RCRN gives an improvement of INLINEFORM0 on both datasets.
With attentive pooling, RCRN maintains a INLINEFORM1 improvement in terms of MAP score. However, the gains on MRR are greater ( INLINEFORM2 ). Notably, AP-RCRN model outperforms the official results reported in BIBREF45 . Overall, we observe that RCRN is much stronger than BiLSTMs and 3L-BiLSTMs on this task.", "Results (Table TABREF26 ) show that enhancing R-NET with RCRN can lead to considerable improvements. This leads to an improvement of INLINEFORM0 on all four metrics. Note that our model only uses a single layered RCRN while R-NET uses 3 layered BiGRUs. This empirical evidence might suggest that RCRN is a better way to utilize multiple recurrent layers.", "Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs which have approximately equal parameterization. 3L-BiLSTMs were overall better than BiLSTMs but lose out on a minority of datasets. RCRN outperforms a wide range of competitive baselines such as DiSAN, Bi-SRUs, BCN and LSTM-CNN, etc. We achieve (close to) state-of-the-art performance on SST, TREC question classification and 16 Amazon review datasets." ], [ "This section aims to get a benchmark on model performance with respect to model efficiency. In order to do that, we benchmark RCRN along with BiLSTMs and 3 layered BiLSTMs (with and without cuDNN optimization) on different sequence lengths (i.e., INLINEFORM0 ). We use the IMDb sentiment task. We use the same standard hardware (a single Nvidia GTX1070 card) and an identical overarching model architecture. The dimensionality of the model is set to 200 with a fixed batch size of 32. Finally, we also benchmark a CUDA optimized adaptation of RCRN which has been described earlier (Section SECREF4 ).", "Table TABREF32 reports training/inference times of all benchmarked models. The fastest model is naturally the 1 layer BiLSTM (cuDNN). Intuitively, the speed of RCRN should be roughly equivalent to using 3 BiLSTMs. Surprisingly, we found that the cuda optimized RCRN performs consistently slightly faster than the 3 layer BiLSTM (cuDNN). At the very least, RCRN provides comparable efficiency to using stacked BiLSTM and empirically we show that there is nothing to lose in this aspect. However, we note that cuda-level optimizations have to be performed. Finally, the non-cuDNN optimized BiLSTM and stacked BiLSTMs are also provided for reference." ], [ "We proposed Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and encoder for a myriad of NLP tasks. RCRN operates in a novel controller-listener architecture which uses RNNs to learn the gating functions of another RNN. We apply RCRN to a potpourri of NLP tasks and achieve promising/highly competitive results on all tasks and 26 benchmark datasets. Overall findings suggest that our controller-listener architecture is more effective than stacking RNN layers. Moreover, RCRN remains equally (or slightly more) efficient compared to stacked RNNs of approximately equal parameterization. There are several potential interesting directions for further investigating RCRNs. Firstly, investigating RCRNs controlling other RCRNs and secondly, investigating RCRNs in other domains where recurrent models are also prevalent for sequence modeling. The source code of our model can be found at https://github.com/vanzytay/NIPS2018_RCRN." ], [ "We thank the anonymous reviewers and area chair from NIPS 2018 for their constructive and high quality feedback." 
] ], "section_name": [ "Introduction", "Related Work", "Recurrently Controlled Recurrent Networks (RCRN)", "Controller Cell", "Listener Cell", "Overall RCRN Architecture, Variants and Implementation", "Experiments", "Tasks and Datasets", "Task-Specific Model Architectures and Implementation Details", "Overall Results", "Runtime Analysis", "Conclusion and Future Directions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "418f10bc1c557b7218dd99a90693696afa6b9c7d" ], "answer": [ { "evidence": [ "On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model - sentence state LSTMs (SLSTM) BIBREF31 . The macro average performance gain over BiLSTMs ( INLINEFORM0 ) and Stacked (2 X BiLSTM) ( INLINEFORM1 ) is also notable. On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets." ], "extractive_spans": [], "free_form_answer": "Proposed RCRN outperforms ablative baselines BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets.", "highlighted_evidence": [ "On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "c2dd23ae369e2d6a7fe890acd717c3d28c8f6a12" ], "answer": [ { "evidence": [ "Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs which have approximately equal parameterization. 3L-BiLSTMs were overall better than BiLSTMs but lose out on a minority of datasets. RCRN outperforms a wide range of competitive baselines such as DiSAN, Bi-SRUs, BCN and LSTM-CNN, etc. We achieve (close to) state-of-the-art performance on SST, TREC question classification and 16 Amazon review datasets." ], "extractive_spans": [ "approximately equal parameterization" ], "free_form_answer": "", "highlighted_evidence": [ "Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs which have approximately equal parameterization." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "By how much do they outperform BiLSTMs in Sentiment Analysis?", "Does their model have more parameters than other models?" ], "question_id": [ "602396d1f5a3c172e60a10c7022bcfa08fa6cbc9", "b984612ceac5b4cf5efd841af2afddd244ee497a" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: High level overview of our proposed RCRN architecture.", "Table 1: Results on the Amazon Reviews dataset. † are models implemented by us.", "Table 4: Results on IMDb binary sentiment clasification.", "Table 6: Results on SNLI dataset.", "Table 10: Training and Inference times on IMDb binary sentiment classification task with varying sequence lengths." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "7-Table4-1.png", "7-Table6-1.png", "9-Table10-1.png" ] }
[ "By how much do they outperform BiLSTMs in Sentiment Analysis?" ]
[ [ "1811.09786-Overall Results-1" ] ]
[ "Proposed RCRN outperforms ablative baselines BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets." ]
505
1704.05907
End-to-End Multi-View Networks for Text Classification
We propose a multi-view network for text classification. Our method automatically creates various views of its input text, each taking the form of soft attention weights that distribute the classifier's focus among a set of base features. For a bag-of-words representation, each view focuses on a different subset of the text's words. Aggregating many such views results in a more discriminative and robust representation. Through a novel architecture that both stacks and concatenates views, we produce a network that emphasizes both depth and width, allowing training to converge quickly. Using our multi-view architecture, we establish new state-of-the-art accuracies on two benchmark tasks.
{ "paragraphs": [ [ "State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it BIBREF0 . For text classification, one can think of this as a single reader building up an increasingly refined understanding of the content. In a departure from this philosophy, we propose a divide-and-conquer approach, where a team of readers each focus on different aspects of the text, and then combine their representations to make a joint decision.", "More precisely, the proposed Multi-View Network (MVN) for text classification learns to generate several views of its input text. Each view is formed by focusing on different sets of words through a view-specific attention mechanism. These views are arranged sequentially, so each subsequent view can build upon or deviate from previous views as appropriate. The final representation that concatenates these diverse views should be more robust to noise than any one of its components. Furthermore, different sentences may look similar under one view but different under another, allowing the network to devote particular views to distinguishing between subtle differences in sentences, resulting in more discriminative representations.", "Unlike existing multi-view neural network approaches for image processing BIBREF1 , BIBREF2 , where multiple views are provided as part of the input, our MVN learns to automatically create views from its input text by focusing on different sets of words. Compared to deep Convolutional Networks (CNN) for text BIBREF3 , BIBREF0 , the MVN strategy emphasizes network width over depth. Shorter connections between each view and the loss function enable better gradient flow in the networks, which makes the system easier to train. Our use of multiple views is similar in spirit to the weak learners used in ensemble methods BIBREF4 , BIBREF5 , BIBREF6 , but our views produce vector-valued intermediate representations instead of classification scores, and all our views are trained jointly with feedback from the final classifier.", "Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3 , show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets." ], [ "The MVN architecture is depicted in Figure FIGREF1 . First, individual selection vectors INLINEFORM0 are created, each formed by a distinct softmax weighted sum over the word vectors of the input text. Next, these selections are sequentially transformed into views INLINEFORM1 , with each view influencing the views that come after it. Finally, all views are concatenated and fed into a two-layer perceptron for classification." ], [ "Each selection INLINEFORM0 is constructed by focusing on a different subset of words from the original text, as determined by a softmax weighted sum BIBREF8 . Given a piece of text with INLINEFORM1 words, we represent it as a bag-of-words feature matrix INLINEFORM2 INLINEFORM3 . Each row of the matrix corresponds to one word, which is represented by a INLINEFORM4 -dimensional vector, as provided by a learned word embedding table. 
The selection INLINEFORM5 for the INLINEFORM6 view is the softmax weighted sum of features: DISPLAYFORM0 ", " where the weight INLINEFORM0 is computed by: DISPLAYFORM0 DISPLAYFORM1 ", " here, INLINEFORM0 (a vector) and INLINEFORM1 (a matrix) are learned selection parameters. By varying the weights INLINEFORM2 , the selection for each view can focus on different words from INLINEFORM3 , as illustrated by different color curves connecting to INLINEFORM4 in Figure FIGREF1 ." ], [ "Having built one INLINEFORM0 for each of our INLINEFORM1 views, the actual views are then created as follows: DISPLAYFORM0 ", " where INLINEFORM0 are learned parameter matrices, and INLINEFORM1 represents concatenation. The first and last views are formed by solely INLINEFORM2 ; however, they play very different roles in our network. INLINEFORM3 is completely disconnected from the others, an independent attempt at good feature selection, intended to increase view diversity BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Conversely, INLINEFORM4 forms the base of a structure similar to a multi-layer perceptron with short-cutting, as defined by the recurrence in Equation EQREF7 . Here, the concatenation of all previous views implements short-cutting, while the recursive definition of each view implements stacking, forming a deep network depicted by horizontal arrows in Figure FIGREF1 . This structure makes each view aware of the information in those previous to it, allowing them to build upon each other. Note that the INLINEFORM5 matrices are view-specific and grow with each view, making the overall parameter count quadratic in the number of views." ], [ "The final step is to transform our views into a classification of the input text. The MVN does so by concatenating its view vectors, which are then fed into a fully connected projection followed by a softmax function to produce a distribution over the possible classes. Dropout regularization BIBREF13 can be applied at this softmax layer, as in BIBREF14 ." ], [ "The MVN's selection layer operates on a matrix of feature vectors INLINEFORM0 , which has thus far corresponded to a bag of word vectors. Each view's selection makes intuitive sense when features correspond to words, as it is easy to imagine different readers of a text focusing on different words, with each reader arriving at a useful interpretation. However, there is a wealth of knowledge on how to construct powerful feature representations for text, such as those used by convolutional neural networks (CNNs). To demonstrate the utility of having views that weight arbitrary feature vectors, we augment our bag-of-words representation with vectors built by INLINEFORM1 -gram filters max-pooled over the entire text BIBREF14 , with one feature vector for each INLINEFORM2 -gram order, INLINEFORM3 . The augmented INLINEFORM4 matrix has INLINEFORM5 rows. Unlike our word vectors, the 4 CNN vectors each provide representations of the entire text. Returning to our reader analogy, one could imagine these to correspond to quick ( INLINEFORM6 ) or careful ( INLINEFORM7 ) skims of the text. Regardless of whether a feature vector is built by embedding table or by max-pooled INLINEFORM8 -gram filters, we always back-propagate through all feature construction layers, so they become specialized to our end task." ], [ "The Stanford Sentiment Treebank contains 11,855 sentences from movie reviews. 
We use the same splits for training, dev, and test data as in BIBREF7 to predict the fine-grained 5-class sentiment categories of the sentences. For comparison purposes, following BIBREF14 , BIBREF15 , BIBREF16 , we train the models using both phrases and sentences, but only evaluate sentences at test time.", "We initialized all of the word embeddings BIBREF17 , BIBREF18 using the publicly available 300 dimensional pre-trained vectors from GloVe BIBREF19 . We learned 8 views with 200 dimensions each, which requires us to project the 300 dimensional word vectors, which we implemented using a linear transformation, whose weight matrix and bias term are shared across all words, followed by a INLINEFORM0 activation. For optimization, we used Adadelta BIBREF20 , with a starting learning rate of 0.0005 and a mini-batch of size 50. Also, we used dropout (with a rate of 0.2) to avoid overfitting. All of these MVN hyperparameters were determined through experiments measuring validation-set accuracy.", "The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11 . The results indicate that the bag-of-words MVN outperforms most methods, but obtains lower accuracy than the state-of-the-art results achieved by the tree-LSTM BIBREF21 , BIBREF22 and the high-order CNN BIBREF16 . However, when augmented with 4 convolutional features as described in Section SECREF9 , the MVN strategy surpasses both of these, establishing a new state-of-the-art on this benchmark.", "In Figure FIGREF12 , we present the test-set accuracies obtained while varying the number of views in our MVN with convolutional features. These results indicate that better predictive accuracy can be achieved while increasing the number of views up to eight. After eight, the accuracy starts to drop. The number of MVN views should be tuned for each new application, but it is good to see that not too many views are required to achieve optimal performance on this task.", "To better understand the benefits of the MVN method, we further analyzed the eight views constructed by our best model. After training, we obtained the view representation vectors for both the training and testing data, and then independently trained a very simple, but fast and stable Naïve Bayes classifier BIBREF23 for each view. We report class-specific F-measures for each view in Figure FIGREF13 . From this figure, we can observe that different views focus on different target classes. For example, the first two views perform poorly on the 0 (very negative) and 1 (negative) classes, but achieve the highest F-measures on the 2 (neutral) class. Meanwhile, the non-neutral classes each have a different view that achieves the highest F-measure. This suggests that some views have specialized in order to better separate subsets of the training data.", "We provide an ablation study in Table TABREF14 . First, we construct a traditional ensemble model. We independently train eight MVN models, each with a single view, to serve as weak learners. We have them vote with equal weight for the final classification, obtaining a test-set accuracy of 50.2. Next, we restrict the views in the MVN to be unaware of each other. That is, we replace Equation EQREF7 with INLINEFORM0 , which removes all horizontal links in Figure FIGREF1 . This drops performance to 49.0. 
Finally, we experiment with a variant of MVN, where each view is only connected to the most recent previous view, replacing Equation EQREF7 with INLINEFORM1 , leading to a version where the parameter count grows linearly in the number of views. This drops the test-set performance to 50.5. These experiments suggest that enabling the views to build upon each other is crucial for achieving the best performance." ], [ "The AG corpus BIBREF3 , BIBREF0 contains categorized news articles from more than 2,000 news outlets on the web. The task has four classes, and for each class there are 30,000 training documents and 1,900 test documents. A random sample of the training set was used for hyper-parameter tuning. The training and testing settings of this task are exactly the same as those presented for the Stanford Sentiment Treebank task in Section SECREF10 , except that the mini-batch size is reduced to 23, and each view has a dimension of 100.", "The test errors obtained by various methods are presented in Table TABREF16 . These results show that the bag-of-words MVN outperforms the state-of-the-art accuracy obtained by the non-neural INLINEFORM0 -gram TFIDF approach BIBREF3 , as well as several very deep CNNs BIBREF0 . Accuracy was further improved when the MVN was augmented with 4 convolutional features.", "In Figure FIGREF17 , we show how accuracy and loss evolve on the validation set during MVN training. These curves show that training is quite stable. The MVN achieves its best results in just a few thousand iterations." ], [ "We have presented a novel multi-view neural network for text classification, which creates multiple views of the input text, each represented as a weighted sum of a base set of feature vectors. These views work together to produce a discriminative feature representation for text classification. Unlike many neural approaches to classification, our architecture emphasizes network width in addition to depth, enhancing gradient flow during training. We have used the multi-view network architecture to establish new state-of-the-art results on two benchmark text classification tasks. In the future, we wish to better understand the benefits of generating multiple views, explore new sources of base features, and apply this technique to other NLP problems such as translation or tagging." ] ], "section_name": [ "Introduction", "Multi-View Networks for Text", "Multiple Attentions for Selection", "Aggregating Selections into Views", "Classification with Views", "Beyond Bags of Words", "Stanford Sentiment Treebank", "AG's English News Categorization", "Conclusion and Future Work" ] }
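As a rough sketch of the selection-and-view mechanism described in the MVN paper above (the exact equations are hidden behind the EQREF/INLINEFORM placeholders), the following PyTorch snippet builds one softmax-weighted selection per view over a bag of word vectors and lets each later view condition on the earlier ones. The tanh-based attention scoring, the choice to keep only the first view disconnected, and all dimension choices are assumptions made for illustration, not a faithful reproduction of the published model.

```python
import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    """Sketch: per-view soft attention selections, stacked and concatenated."""
    def __init__(self, d, view_dim, n_views=8):
        super().__init__()
        self.n_views = n_views
        # One attention (selection) head per view.
        self.sel_W = nn.ModuleList(nn.Linear(d, d) for _ in range(n_views))
        self.sel_u = nn.ParameterList(nn.Parameter(torch.randn(d))
                                      for _ in range(n_views))
        # View k sees its own selection plus all previous non-independent
        # views, so the input size of its projection grows with k.
        self.view_proj = nn.ModuleList(
            nn.Linear(d + max(k - 1, 0) * view_dim, view_dim)
            for k in range(n_views))

    def forward(self, F):                        # F: (batch, words, d)
        views = []
        for k in range(self.n_views):
            scores = torch.tanh(self.sel_W[k](F)) @ self.sel_u[k]
            alpha = torch.softmax(scores, dim=1)           # focus over words
            s_k = (alpha.unsqueeze(-1) * F).sum(dim=1)     # selection vector
            if k == 0:
                inp = s_k                    # first view stays disconnected
            else:
                inp = torch.cat([s_k] + views[1:k], dim=-1)  # stack + concat
            views.append(torch.tanh(self.view_proj[k](inp)))
        # Concatenation of all views feeds the final classifier layers.
        return torch.cat(views, dim=-1)

# Example: 8 views of 200 dims over a batch of 4 texts of 30 word vectors.
mvn = MultiViewEncoder(d=300, view_dim=200, n_views=8)
print(mvn(torch.randn(4, 30, 300)).shape)        # torch.Size([4, 1600])
```

The convolutional n-gram feature vectors mentioned in the paper could be appended as extra rows of F before the selections are computed, which is how the "beyond bags of words" variant extends the same mechanism.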
{ "answers": [ { "annotation_id": [ "d19713335548014b47fb0ad5fb6ef1211aa18b21" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015)." ], "extractive_spans": [], "free_form_answer": "51.5", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "41b2c09048e1abf4657a4aa80587b410d18a4bda" ], "answer": [ { "evidence": [ "The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11 . The results indicate that the bag-of-words MVN outperforms most methods, but obtains lower accuracy than the state-of-the-art results achieved by the tree-LSTM BIBREF21 , BIBREF22 and the high-order CNN BIBREF16 . However, when augmented with 4 convolutional features as described in Section SECREF9 , the MVN strategy surpasses both of these, establishing a new state-of-the-art on this benchmark." ], "extractive_spans": [], "free_form_answer": "High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM ", "highlighted_evidence": [ "The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "eac930578b2cd4b3832b995ffb3fdc030e0578dc" ], "answer": [ { "evidence": [ "Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3 , show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets." ], "extractive_spans": [], "free_form_answer": " They used Stanford Sentiment Treebank benchmark for sentiment classification task and AG English news corpus for the text classification task.", "highlighted_evidence": [ "d ", "Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3 , show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets.", " Stanford Sentiment Treebank", " AG English news corpus " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what state of the accuracy did they obtain?", "what models did they compare to?", "which benchmark tasks did they experiment on?" 
], "question_id": [ "bde6fa2057fa21b38a91eeb2bb6a3ae7fb3a2c62", "a381ba83a08148ce0324b48b8ff35128e66f580a", "edb068df4ffbd73b379590762125990fcd317862" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: A MVN architecture with four views.", "Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015).", "Figure 2: Accuracies obtained by varying the number of views.", "Table 3: Error rates on the AG News test set. All results except for the MVN are drawn from (Conneau et al., 2016)", "Figure 3: Class-specific F-measures obtained by Naı̈ve Bayes classifiers built over different views.", "Table 2: Ablation experiments on the Stanford Sentiment Treebank test set", "Figure 4: Accuracies and cost on the validation set during training on the AG News data set." ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "3-Figure2-1.png", "4-Table3-1.png", "4-Figure3-1.png", "4-Table2-1.png", "5-Figure4-1.png" ] }
[ "what state of the accuracy did they obtain?", "what models did they compare to?", "which benchmark tasks did they experiment on?" ]
[ [ "1704.05907-3-Table1-1.png" ], [ "1704.05907-Stanford Sentiment Treebank-2" ], [ "1704.05907-Introduction-3" ] ]
[ "51.5", "High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM ", " They used Stanford Sentiment Treebank benchmark for sentiment classification task and AG English news corpus for the text classification task." ]
506
2001.08051
TLT-school: a Corpus of Non Native Children Speech
This paper describes "TLT-school" a corpus of speech utterances collected in schools of northern Italy for assessing the performance of students learning both English and German. The corpus was recorded in the years 2017 and 2018 from students aged between nine and sixteen years, attending primary, middle and high school. All utterances have been scored, in terms of some predefined proficiency indicators, by human experts. In addition, most of utterances recorded in 2017 have been manually transcribed carefully. Guidelines and procedures used for manual transcriptions of utterances will be described in detail, as well as results achieved by means of an automatic speech recognition system developed by us. Part of the corpus is going to be freely distributed to scientific community particularly interested both in non-native speech recognition and automatic assessment of second language proficiency.
{ "paragraphs": [ [ "We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named \"Trentino Language Testing\" in schools (TLT-school), that will be described in the following.", "All the collected sentences have been annotated by human experts in terms of some predefined “indicators” which, in turn, were used to assign the proficiency level to each student undertaking the assigned test. This level is expressed according to the well-known Common European Framework of Reference for Languages (Council of Europe, 2001) scale. The CEFR defines 6 levels of proficiency: A1 (beginner), A2, B1, B2, C1 and C2. The levels considered in the evaluation campaigns where the data have been collected are: A1, A2 and B1.", "The indicators measure the linguistic competence of test takers both in relation to the content (e.g. grammatical correctness, lexical richness, semantic coherence, etc.) and to the speaking capabilities (e.g. pronunciation, fluency, etc.). Refer to Section SECREF2 for a description of the adopted indicators.", "The learners are Italian students, between 9 and 16 years old. They took proficiency tests by answering question prompts provided in written form. The “TLT-school” corpus, that we are going to make publicly available, contains part of the spoken answers (together with the respective manual transcriptions) recorded during some of the above mentioned evaluation campaigns. We will release the written answers in future. Details and critical issues found during the acquisition of the answers of the test takers will be discussed in Section SECREF2.", "The tasks that can be addressed by using the corpus are very challenging and pose many problems, which have only partially been solved by the interested scientific community.", "From the ASR perspective, major difficulties are represented by: a) recognition of both child and non-native speech, i.e. Italian pupils speaking both English and German, b) presence of a large number of spontaneous speech phenomena (hesitations, false starts, fragments of words, etc.), c) presence of multiple languages (English, Italian and German words are frequently uttered in response to a single question), d) presence of a significant level of background noise due to the fact that the microphone remains open for a fixed time interval (e.g. 20 seconds - depending on the question), and e) presence of non-collaborative speakers (students often joke, laugh, speak softly, etc.). Refer to Section SECREF6 for a detailed description of the collected spoken data set.", "Furthermore, since the sets of data from which “TLT-school” was derived were primarily acquired for measuring proficiency of second language (L2) learners, it is quite obvious to exploit the corpus for automatic speech rating. To this purpose, one can try to develop automatic approaches to reliably estimate the above-mentioned indicators used by the human experts who scored the answers of the pupils (such an approach is described in BIBREF0). However, it has to be noticed that scientific literature proposes to use several features and indicators for automatic speech scoring, partly different from those adopted in “TLT-school” corpus (see below for a brief review of the literature). 
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area.", "Finally, it is worth mentioning that also written responses of “TLT-school” corpus are characterised by a high level of noise due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both language and behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5", "Relation to prior work. Scientific literature is rich in approaches for automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2.", "In order to address automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as benchmark for some researches BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks.", "A public set of spoken data has been recently distributed in a spoken CALL (Computer Assisted Language Learning) shared task where Swiss students learning English had to answer to both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets.", "Many non-native speech corpora (mostly in English as target language) have been collected during the years. A list, though not recent, as well as a brief description of most of them can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, the access to publicly available non-native children's speech corpora, as well as of children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe worth mentioning the following corpora. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children.", "By distributing “TLT-school” corpus, we hope to help researchers to investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks." 
], [ "In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers to only the 2018 campaign, that was split in two parts: 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most part of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part to the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored.", "Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.", "The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible." ], [ "The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from A1 level, which differs in the number of questions (11 for English; 10 for German), both English and German A2 and B1 levels have respectively 6 and 7 questions each. As for A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German).", "A2 level test is composed of small talk questions which relate to everyday life situations. In this case, questions are more open-ended than the aforementioned ones and allow the test-takers to interact by means of a broader range of answers. Finally, as for B1 level, questions are similar to A2 ones, but they include a role-play activity in the final part, which allows a good amount of freedom and creativity in answering the question." ], [ "Table reports some statistics extracted from the written data collected so far. 
In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces.", "It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...).", "To do this, we have applied some text processing, that in sequence:", "$\\bullet $ removes strange characters;", "$\\bullet $ performs some text normalisation (lowercase, umlaut, numbers, ...) and tokenisation (punctuation, etc.)", "$\\bullet $ removes / corrects non words (e.g. hallooooooooooo becomes hallo; aaaaaaaaeeeeeeeeiiiiiiii is removed)", "$\\bullet $ identifies the language of each word, choosing among Italian, English, German;", "$\\bullet $ corrects common typing errors (e.g. ai em becomes i am)", "$\\bullet $ replaces unknown words, with respect to a large lexicon, with the label $<$unk$>$.", "Table reports some samples of written answers." ], [ "Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand." ], [ "In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. 
As a consequence, we decided to apply the following transcription rules:", "only the main speaker has to be transcribed; presence of other voices (schoolmates, teacher) should be reported only with the label “@voices”,", "presence of whispered speech was found to be significant, so it should be explicitly marked with the label “()”,", "badly pronounced words have to be marked by a “#” sign, without trying to phonetically transcribe the pronounced sounds; “#*” marks incomprehensible speech;", "speech in a different language from the target language has to be reported by means of an explicit marker “I am 10 years old @it(io ho già risposto)”.", "Next, we concatenated utterances to be transcribed into blocks of about 5 minutes each. We noticed that knowing the question and hearing several answers could be of great help for transcribing some poorly pronounced words or phrases. Therefore, each block contains only answers to the same question, explicitly reported at the beginning of the block.", "We engaged about 30 students from two Italian linguistic high schools (namely “C” and “S”) to perform manual transcriptions.", "After a joint training session, we paired students together. Each pair first transcribed, individually, the same block of 5 minutes. Then, they went through a comparison phase, where each pair of students discussed their choices and agreed on a single transcription for the assigned data. Transcriptions made before the comparison phase were retained to evaluate inter-annotator agreement. Apart from this first 5 minute block, each utterance was transcribed by only one transcriber. Inter-annotator agreement for the 5-minute blocks is shown in Table in terms of words (after removing hesitations and other labels related to background voices and noises, etc.). The low level of agreement reflects the difficulty of the task.", "In order to assure quality of the manual transcriptions, every sentence transcribed by the high school students was automatically processed to find out possible formal errors, and manually validated by researchers in our lab.", "Speakers were assigned either to training or evaluation sets, with proportions of $\\frac{2}{3}$ and $\\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded." ], [ "From the above description it appears that the corpus can be effectively used in many research directions." ], [ "The spoken corpus features non-native speech recordings in real classrooms and, consequently, peculiar phenomena appear and can be investigated. Phonological and cross-language interference requires specific approaches for accurate acoustic modelling. Moreover, for coping with cross-language interference it is important to consider alternative ways to represent specific words (e.g. words of two languages with the same graphemic representation).", "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. 
Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.", "As for language models, accurate transcriptions of spoken responses demand for models able to cope with not well-formed expressions (due to students' grammatical errors). Also the presence of code-switched words, words fragments and spontaneous speech phenomena requires specific investigations to reduce their impact on the final performance.", "We believe that the particular domain and set of data pave the way to investigate into various ASR topics, such as: non-native speech, children speech, spontaneous speech, code-switching, multiple pronunciation, etc." ], [ "The corpus has been (partly) annotated using the guidelines presented in Section SECREF3 on the basis of a preliminary analysis of the most common acoustic phenomena appearing in the data sets.", "Additional annotations could be included to address topics related to other spurious segments, as for example: understandable words pronounced in other languages or by other students, detection of phonological interference, detection of spontaneous speech phenomena, detection of overlapped speech, etc. In order to measure specific proficiency indicators, e.g. related to pronunciation and fluency, suprasegmental annotations can be also inserted in the corpus." ], [ "The corpus is a valuable resource for training and evaluating a scoring classifier based on different approaches. Preliminary results BIBREF0 show that the usage of suitable linguistic features mainly based on statistical language models allow to predict the scores assigned by the human experts.", "The evaluation campaign has been conceived to verify the expected proficiency level according to class grade; as a result, although the proposed test cannot be used to assign a precise score to a given student, it allows to study typical error patterns according to age and level of the students.", "Furthermore, the fine-grained annotation, at sentence level, of the indicators described above is particularly suitable for creating a test bed for approaches based on “word embeddings” BIBREF30, BIBREF31, BIBREF32 to automatically estimate the language learner proficiency. Actually, the experiments reported in BIBREF30 demonstrate superior performance of word-embeddings for speech scoring with respect to the well known (feature-based) SpeechRater system BIBREF33, BIBREF2. In this regard, we believe that additional, specific annotations can be developed and included in the “TLT-school” corpus." ], [ "By looking at the manual transcriptions, it is straightforward to detect the most problematic words, i.e. frequently occurring words, which were often marked as mispronounced (preceded by label “#”). This allows to prepare a set of data composed by good pronounced vs. bad pronounced words.", "A list of words, partly mispronounced, is shown in Table , from which one can try to model typical pronunciation errors (note that other occurrences of the selected words could be easily extracted from the non-annotated data). 
Finally, as mentioned above, further manual checking and annotation could be introduced to improve modelling of pronunciation errors." ], [ "The corpus to be released is still under preparation, given the huge amount of spoken and written data; in particular, some checks are in progress in order to:", "remove from the data responses with personal or inadequate content (e.g. bad language);", "normalise the written responses (e.g. upper/lower case, punctuation, evident typos);", "normalise and verify the consistency of the transcription of spoken responses;", "check the available human scores and - if possible - merge or map the scores according to more general performance categories (e.g. delivery, language use, topic development) and an acknowledged scale (e.g. from 0 to 4).", "In particular, the proposal for an international challenge focused on non-native children speech recognition is being submitted where an English subset will be released and the perspective participants are invited to propose and evaluate state-of-art techniques for dealing with the multiple issues related to this challenging ASR scenario (acoustic and language models, non-native lexicon, noisy recordings, etc.)." ], [ "We have described “TLT-school”, a corpus of both spoken and written answers collected during language evaluation campaigns carried out in schools of northern Italy. The procedure used for data acquisition and for their annotation in terms of proficiency indicators has been also reported. Part of the data has been manually transcribed according to some guidelines: this set of data is going to be made publicly available. With regard to data acquisition, some limitations of the corpus have been observed that might be easily overcome during next campaigns. Special attention should be paid to enhancing the elicitation techniques, starting from adjusting the questions presented to test-takers. Some of the question prompts show some lacks that can be filled in without major difficulty: on the one hand, in the spoken part, questions do not require test-takers to shift tense and some are too suggestive and close-ended; on the other hand, in the written part, some question prompts are presented both in source and target language, thus causing or encouraging code-mixing and negative transfer phenomena. The elicitation techniques in a broader sense will be object of revision (see BIBREF34 and specifically on children speech BIBREF35) in order to maximise the quality of the corpus. As for proficiency indicators, one first step that could be taken in order to increase accuracy in the evaluation phase both for human and automatic scoring would be to divide the second indicator (pronunciation and fluency) into two different indicators, since fluent students might not necessarily have good pronunciation skills and vice versa, drawing for example on the IELTS Speaking band descriptors. Also, next campaigns might consider an additional indicator specifically addressed to score prosody (in particular intonation and rhythm), especially for A2 and B1 level test-takers. Considering the scope of the evaluation campaign, it is important to be aware of the limitations of the associated data sets: proficiency levels limited to A1, B1 and B2 (CEFR); custom indicators conceived for expert evaluation (not particularly suitable for automated evaluation); limited amount of responses per speaker. 
Nevertheless, as already discussed, the fact that the TLT campaign was carried out in 2016 and 2018 in the whole Trentino region makes the corpus a valuable linguistic resource for a number of studies associated to second language acquisition and evaluation. In particular, besides the already introduced proposal for an ASR challenge in 2020, other initiatives for the international community can be envisaged: a study of a fully-automated evaluation procedure without the need of experts' supervision; the investigation of end-to-end classifiers that directly use the spoken response as input and produce proficiency scores according to suitable rubrics." ], [ "This work has been partially funded by IPRASE (http://www.iprase.tn.it) under the project “TLT - Trentino Language Testing 2018”. We thank ISIT (http://www.isit.tn.it) for having provided the data and the reference scores." ] ], "section_name": [ "Introduction", "Data Acquisition", "Data Acquisition ::: Prompts", "Data Acquisition ::: Written Data", "Data Acquisition ::: Spoken Data", "Manual Transcriptions", "Usage of the Data", "Usage of the Data ::: ASR-related Challenges", "Usage of the Data ::: Data Annotation", "Usage of the Data ::: Proficiency Assessment of L2 Learners", "Usage of the Data ::: Modelling Pronunciation", "Distribution of the Corpus", "Conclusions and Future Works", "Acknowledgements" ] }
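To make the text-normalisation steps listed in the Written Data section above more concrete, here is a minimal Python sketch of such a cleaning pipeline. It is not the authors' implementation: the regular expressions, the toy lexicons, the typo table and the `clean_answer` function are assumptions made purely for illustration; a real system would rely on much larger lexicons and a proper language-identification model.

```python
import re

# Toy resources: illustrative stand-ins for the large lexicons and the
# typo table that a real pipeline would need (assumptions, not corpus data).
LEXICONS = {
    "en": {"i", "am", "ten", "years", "old", "hello"},
    "de": {"ich", "bin", "zehn", "jahre", "alt", "hallo"},
    "it": {"io", "ho", "dieci", "anni"},
}
TYPO_FIXES = {"ai em": "i am"}  # common typing errors, fixed at string level


def clean_answer(text: str):
    """Return a list of (token, language) pairs after the cleaning steps."""
    # 1) remove strange characters (keep letters incl. umlauts, digits, apostrophes)
    text = re.sub(r"[^0-9a-zA-ZäöüÄÖÜß'\s]", " ", text)
    # 2) normalisation (lowercase) and whitespace tokenisation
    text = re.sub(r"\s+", " ", text.lower()).strip()
    # 3) correct common typing errors
    for typo, fix in TYPO_FIXES.items():
        text = text.replace(typo, fix)

    cleaned = []
    for tok in text.split():
        # 4) shrink exaggerated character repetitions: hallooooooo -> hallo
        tok = re.sub(r"(.)\1{2,}", r"\1", tok)
        # 5) identify the language of each word (toy lexicon lookup)
        lang = next((code for code, lex in LEXICONS.items() if tok in lex), None)
        if lang is None:
            # 6) unknown with respect to the lexicon -> replace with <unk>
            cleaned.append(("<unk>", "unk"))
        else:
            cleaned.append((tok, lang))
    return cleaned


# Invented example answer, not taken from the corpus:
print(clean_answer("Hallooooooooo!!! ai em ten years old"))
# [('hallo', 'de'), ('i', 'en'), ('am', 'en'), ('ten', 'en'), ('years', 'en'), ('old', 'en')]
```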
{ "answers": [ { "annotation_id": [ "abd7e0e7e34db65f0bdc4df7633ae463ec8c8752" ], "answer": [ { "evidence": [ "It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...)." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...)." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "fb2e157f2e83e818c3360f4e02d350db2d310ab8" ], "answer": [ { "evidence": [ "Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.", "The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible.", "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences." ], "extractive_spans": [], "free_form_answer": "They used 6 indicators for proficiency (same for written and spoken) each marked by bad, medium or good by one expert.", "highlighted_evidence": [ "Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.", "The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table ", "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "c7c5dbe72e639593bcda8f67d8ab1cf5950eddea" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.", "Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. 
Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively." ], "extractive_spans": [], "free_form_answer": "6 indicators:\n- lexical richness\n- pronunciation and fluency\n- syntactical correctness\n- fulfillment of delivery\n- coherence and cohesion\n- communicative, descriptive, narrative skills", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.", " Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "5c84ebf89be01243bd3a2d5423a3d37197e61db3" ], "answer": [ { "evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ], "extractive_spans": [], "free_form_answer": "Accuracy not available: WER results are reported 42.6 German, 35.9 English", "highlighted_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "ef163985c44c5f2a5cadf594dffda9c99f26fdf7" ], "answer": [ { "evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." 
], "extractive_spans": [], "free_form_answer": "Speech recognition system is evaluated using WER metric.", "highlighted_evidence": [ "Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages.", "FLOAT SELECTED: Table 8: WER results on 2017 spoken test sets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "43bb717352cee3e1b1a7c6e54274a8e842e0a34b" ], "answer": [ { "evidence": [ "Speakers were assigned either to training or evaluation sets, with proportions of $\\frac{2}{3}$ and $\\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded.", "FLOAT SELECTED: Table 7: Statistics from the spoken data sets (2017) used for ASR." ], "extractive_spans": [], "free_form_answer": "Total number of transcribed utterances including Train and Test for both Eng and Ger language is 5562 (2188 cleaned)", "highlighted_evidence": [ "Speakers were assigned either to training or evaluation sets, with proportions of $\\frac{2}{3}$ and $\\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded.", "FLOAT SELECTED: Table 7: Statistics from the spoken data sets (2017) used for ASR." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "98b883523ea10066572958648cdf5d90c2bce39b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.", "Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand." 
], "extractive_spans": [], "free_form_answer": "Total number of utterances available is: 70607 (37344 ENG + 33263 GER)", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Spoken data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.", "Table reports some statistics extracted from the acquired spoken data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "two", "two", "two", "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "question": [ "Are any of the utterances ungrammatical?", "How is the proficiency score calculated?", "What proficiency indicators are used to the score the utterances?", "What accuracy is achieved by the speech recognition system?", "How is the speech recognition system evaluated?", "How many of the utterances are transcribed?", "How many utterances are in the corpus?" ], "question_id": [ "2159062595f24ec29826d517429e1b809ba068b3", "9ebb2adf92a0f8db99efddcade02a20a219ca7d9", "973f6284664675654cc9881745880a0e88f3280e", "0a3a8d1b0cbac559f7de845d845ebbfefb91135e", "ec2b8c43f14227cf74f9b49573cceb137dd336e7", "5e5460ea955d8bce89526647dd7c4f19b173ab34", "d7d611f622552142723e064f330d071f985e805c" ], "question_writer": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ], "search_query": [ "part of speech", "part of speech", "part of speech", "part of speech", "part of speech", "part of speech", "part of speech" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ] }
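The annotations above report the ASR results as word error rates (WER). As a reminder of what that metric measures, the following self-contained sketch computes WER as the word-level Levenshtein distance normalised by the reference length. The two example sentences are invented for illustration and do not come from the corpus.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Invented example, not taken from TLT-school:
print(word_error_rate("i am ten years old", "i am then years"))  # 0.4
```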
{ "caption": [ "Table 1: Evaluation of L2 linguistic competences in Trentino: level, grade, age and number of pupils participating in the evaluation campaigns. Most of the pupils did both the English and the German tests.", "Table 2: Written data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.", "Table 3: Spoken data collected during different evaluation campaigns. Column “#Q” indicates the total number of different (written) questions presented to the pupils.", "Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.", "Table 5: Samples of written answers to English questions. On each line the CEFR proficiency level, the question and the answer are reported. Other information (session and question id, total and individual scores, school/class/student anonymous id) is also available but not included below.", "Table 6: Inter-annotator agreement between pairs of students, in terms of words. Students transcribed English utterances first and German ones later.", "Table 7: Statistics from the spoken data sets (2017) used for ASR.", "Table 8: WER results on 2017 spoken test sets.", "Table 9: Words suitable for pronunciation analysis. Data come from the 2017 manually transcribed data. Numbers indicate the number of occurrences, divided into test and training, with good and bad pronunciations." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Table3-1.png", "3-Table4-1.png", "4-Table5-1.png", "4-Table6-1.png", "4-Table7-1.png", "5-Table8-1.png", "6-Table9-1.png" ] }
[ "How is the proficiency score calculated?", "What proficiency indicators are used to the score the utterances?", "What accuracy is achieved by the speech recognition system?", "How is the speech recognition system evaluated?", "How many of the utterances are transcribed?", "How many utterances are in the corpus?" ]
[ [ "2001.08051-Data Acquisition-1", "2001.08051-3-Table4-1.png", "2001.08051-Data Acquisition-2" ], [ "2001.08051-Data Acquisition-1", "2001.08051-3-Table4-1.png" ], [ "2001.08051-Usage of the Data ::: ASR-related Challenges-1", "2001.08051-5-Table8-1.png" ], [ "2001.08051-Usage of the Data ::: ASR-related Challenges-1", "2001.08051-5-Table8-1.png" ], [ "2001.08051-Manual Transcriptions-9", "2001.08051-4-Table7-1.png" ], [ "2001.08051-Data Acquisition ::: Spoken Data-0", "2001.08051-3-Table3-1.png" ] ]
[ "They used 6 indicators for proficiency (same for written and spoken) each marked by bad, medium or good by one expert.", "6 indicators:\n- lexical richness\n- pronunciation and fluency\n- syntactical correctness\n- fulfillment of delivery\n- coherence and cohesion\n- communicative, descriptive, narrative skills", "Accuracy not available: WER results are reported 42.6 German, 35.9 English", "Speech recognition system is evaluated using WER metric.", "Total number of transcribed utterances including Train and Test for both Eng and Ger language is 5562 (2188 cleaned)", "Total number of utterances available is: 70607 (37344 ENG + 33263 GER)" ]
513
1611.03382
Efficient Summarization with Read-Again and Copy Mechanism
Encoder-decoder models have been widely used to solve sequence to sequence prediction tasks. However, current approaches suffer from two shortcomings. First, the encoders compute a representation of each word that takes into account only the history of the words read so far, yielding suboptimal representations. Second, current decoders utilize large vocabularies in order to minimize the problem of unknown words, resulting in slow decoding times. In this paper we address both shortcomings. Towards this goal, we first introduce a simple mechanism that reads the full input sequence before committing to a representation of each word. Furthermore, we propose a simple copy mechanism that is able to exploit very small vocabularies and handle out-of-vocabulary words. We demonstrate the effectiveness of our approach on the Gigaword dataset and the DUC competition, outperforming the state-of-the-art.
{ "paragraphs": [ [ "Encoder-decoder models have been widely used in sequence to sequence tasks such as machine translation ( BIBREF0 , BIBREF1 ). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the desired output sequence. The most successful models are LSTM and GRU as they are much easier to train than vanilla RNNs.", "In this paper we are interested in summarization where the input sequence is a sentence/paragraph and the output is a summary of the text. Several encoding-decoding approaches have been proposed ( BIBREF2 , BIBREF3 , BIBREF4 ). Despite their success, it is commonly believed that the intermediate feature vectors are limited as they are created by only looking at previous words. This is particularly detrimental when dealing with large input sequences. Bi-directorial RNNs ( BIBREF5 , BIBREF6 ) try to address this problem by computing two different representations resulting of reading the input sequence left-to-right and right-to-left. The final vectors are computed by concatenating the two representations. However, the word representations are computed with limited scope.", "The decoder employed in all these methods outputs at each time step a distribution over a fixed vocabulary. In practice, this introduces problems with rare words (e.g., proper nouns) which are out of vocabulary. To alleviate this problem, one could potentially increase the size of the decoder vocabulary, but decoding becomes computationally much harder, as one has to compute the soft-max over all possible words. BIBREF7 , BIBREF8 and BIBREF9 proposed to use a copy mechanism that dynamically copy the words from the input sequence while decoding. However, they lack the ability to extract proper embeddings of out-of-vocabulary words from the input context. BIBREF6 proposed to use an attention mechanism to emphasize specific parts of the input sentence when generating each word. However the encoder problem still remains in this approach.", "In this work, we propose two simple mechanisms to deal with both encoder and decoder problems. We borrowed intuition from human readers which read the text multiple times before generating summaries. We thus propose a `Read-Again' model that first reads the input sequence before committing to a representation of each word. The first read representation then biases the second read representation and thus allows the intermediate hidden vectors to capture the meaning appropriate for the input text. We show that this idea can be applied to both LSTM and GRU models. Our second contribution is a copy mechanism which allows us to use much smaller decoder vocabulary sizes resulting in much faster decoding. Our copy mechanism also allows us to construct a better representation of out-of-vocabulary words. We demonstrate the effectiveness of our approach in the challenging Gigaword dataset and DUC competition showing state-of-the-art performance." ], [ "In the past few years, there has been a lot of work on extractive summarization, where a summary is created by composing words or sentences from the source text. Notable examples are BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 and BIBREF14 . As a consequence of their extractive nature the summary is restricted to words (sentences) in the source text.", "Abstractive summarization, on the contrary, aims at generating consistent summaries based on understanding the input text. 
Although there has been much less work on abstractive methods, they can in principle produce much richer summaries. Abstractive summarization is standardized by the DUC2003 and DUC2004 competitions ( BIBREF15 ). Some of the prominent approaches on this task includes BIBREF16 , BIBREF17 , BIBREF18 and BIBREF19 . Among them, the TOPIARY system ( BIBREF17 ) performs the best in the competitions amongst non neural net based methods.", "Very recently, the success of deep neural networks in many natural language processing tasks ( BIBREF20 ) has inspired new work in abstractive summarization . BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. BIBREF3 build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, BIBREF4 extended BIBREF2 's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word condition on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step. In contrast, in this work we propose a `Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size resulting in much faster inference times. Furthermore our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally our experiments show state-of-the-art performance on the DUC competition." ], [ "Our work is also closely related to recent work on neural machine translation, where neural encoder-decoder models have shown promising results ( BIBREF21 , BIBREF0 , BIBREF1 ). BIBREF6 further developed an attention mechanism in the decoder in order to pay attention to a specific part of the input at every generating time-step. Our approach also exploits an attention mechanism during decoding." ], [ "Dealing with Out-Of-Vocabulary words (OOVs) is an important issue in sequence to sequence approaches as we cannot enumerate all possible words and learn their embeddings since they might not be part of our training set. BIBREF22 address this issue by annotating words on the source, and aligning OOVs in the target with those source words. Recently, BIBREF23 propose Pointer Networks, which calculate a probability distribution over the input sequence instead of predicting a token from a pre-defined dictionary. BIBREF24 develop a neural-based extractive summarization model, which predicts the targets from the input sequences. BIBREF7 , BIBREF8 add a hard gate to allow the model to decide wether to generate a target word from the fixed-size dictionary or from the input sequence. BIBREF9 use a softmax operation instead of the hard gating. This softmax pointer mechanism is similar to our decoder. However, our decoder can also extract different OOVs' embedding from the input text instead of using a single INLINEFORM0 UNK INLINEFORM1 embedding to represent all OOVs. This further enhances the model's ability to handle OOVs." 
], [ "Text summarization can be formulated as a sequence to sequence prediction task, where the input is a longer text and the output is a summary of that text. In this paper we develop an encoder-decoder approach to summarization. The encoder is used to represent the input text with a set of continuous vectors, and the decoder is used to generate a summary word by word.", "In the following, we first introduce our `Read-Again' model for encoding sentences. The idea behind our approach is very intuitive and is inspired by how humans do this task. When we create summaries, we first read the text and then we do a second read where we pay special attention to the words that are relevant to generate the summary. Our `Read-Again' model implements this idea by reading the input text twice and using the information acquired from the first read to bias the second read. This idea can be seamlessly plugged into LSTM and GRU models. Our second contribution is a copy mechanism used in the decoder. It allows us to reduce the decoder vocabulary size dramatically and can be used to extract a better embedding for OOVs. fig:model:overall gives an overview of our model." ], [ "We first review the typical encoder used in machine translation (e.g., BIBREF1 , BIBREF6 ). Let INLINEFORM0 be the input sequence of words. An encoder sequentially reads each word and creates the hidden representation INLINEFORM1 by exploting a recurrent neural network (RNN) DISPLAYFORM0 ", "where INLINEFORM0 is the word embedding of INLINEFORM1 . The hidden vectors INLINEFORM2 are then treated as the feature representations for the whole input sentence and can be used by another RNN to decode and generate a target sentence. Although RNNs have been shown to be useful in modeling sequences, one of the major drawback is that INLINEFORM3 depends only on past information i.e., INLINEFORM4 . However, it is hard (even for humans) to have a proper representation of a word without reading the whole input sentence. Following this intuition, we propose our `Read-Again' model where the encoder reads the input sentence twice. In particular, the first read is used to bias the second more attentive read. We apply this idea to two popular RNN architectures, i.e. GRU and LSTM, resulting in better encodings of the input text.", "Note that although other alternatives, such as bidirectional RNN exist, the hidden states from the forward RNN lack direct interactions with the backward RNN, and thus forward/backward hidden states still cannot utilize the whole sequence. Besides, although we only use our model in a uni-directional manner, it can also be easily adapted to the bidirectional case. We now describe the two variants of our model.", "We read the input sentence INLINEFORM0 for the first-time using a standard GRU DISPLAYFORM0 ", "where the function INLINEFORM0 is defined as, DISPLAYFORM0 ", " It consists of two gatings INLINEFORM0 , controlling whether the current hidden state INLINEFORM1 should be directly copied from INLINEFORM2 or should pass through a more complex path INLINEFORM3 .", "Given the sentence feature vector INLINEFORM0 , we then compute an importance weight vector INLINEFORM1 of each word for the second reading. 
We put the importance weight INLINEFORM2 on the skip-connections as shown in fig:model:gru to bias the two information flows: If the current word INLINEFORM3 has a very small weight INLINEFORM4 , then the second read hidden state INLINEFORM5 will mostly take the information directly from the previous state INLINEFORM6 , ignoring the influence of the current word. If INLINEFORM7 is close to 1 then it will be similar to a standard GRU, which is only influenced from the current word. Thus the second reading has the following update rule DISPLAYFORM0 ", "where INLINEFORM0 means element-wise product. We compute the importance weights by attending INLINEFORM1 with INLINEFORM2 as follows DISPLAYFORM0 ", "where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are learnable parameters. Note that INLINEFORM3 is a vector representing the importance of each dimension in the word embedding. Empirically, we find that using a vector is better than a single value. We hypothesize that this is because different dimensions represent different semantic meanings, and a single value lacks the ability to model the variances among these dimensions.", "Combining this with the standard GRU update rule INLINEFORM0 ", "we can simplify the updating rule Eq. ( EQREF15 ) to get DISPLAYFORM0 ", "This equations shows that our `read-again' model on GRU is equivalent to replace the GRU cell with a more general gating mechanism that also depends on the feature representation of the whole sentence computed from the first reading pass. We argue that adding this global information could help direct the information flow for the forward pass resulting in a better encoder.", "We now apply the `Read-Again' idea to the LSTM architecture as shown in fig:model:lstm. Our first reading is performed by an INLINEFORM0 defined as DISPLAYFORM0 ", "Different from the GRU architecture, LSTM calculates the hidden state by applying a non-linear activation function to the cell state INLINEFORM0 , instead of a linear combination of two paths used in the GRU. Thus for our second read, instead of using skip-connections, we make the gating functions explicitly depend on the whole sentence vector computed from the first reading pass. We argue that this helps the encoding of the second reading INLINEFORM1 , as all gating and updating increments are also conditioned on the whole sequence feature vector INLINEFORM2 , INLINEFORM3 . Thus DISPLAYFORM0 ", "In this section we extend our `Read-Again' model to the case where the input sequence has more than one sentence. Towards this goal, we propose to use a hierarchical representation, where each sentence has its own feature vector from the first reading pass. We then combine them into a single vector to bias the second reading pass. We illustrate this in the context of two input sentences, but it is easy to generalize to more sentences. Let INLINEFORM0 and INLINEFORM1 be the two input sentences. The first RNN reads these two sentences independently to get two sentence feature vectors INLINEFORM2 and INLINEFORM3 respectively.", "Here we investigate two different ways to handle multiple sentences. Our first option is to simply concatenate the two feature vectors to bias our second reading pass: DISPLAYFORM0 ", " where INLINEFORM0 and INLINEFORM1 are initial zero vectors. Feeding INLINEFORM2 into the second RNN provides more global information explicitly and helps acquire long term dependencies.", "The second option we explored is shown in fig:modelhierarchy. 
In particular, we use a non-linear transformation to get a single feature vector INLINEFORM0 from both sentence feature vectors: DISPLAYFORM0 ", " The second reading pass is then DISPLAYFORM0 ", " Note that this is more easily scalable to more sentences. In our experiments both approaches perform similarly." ], [ "In this paper we argue that only a small number of common words are needed for generating a summary in addition to the words that are present in the source text. We can consider this as a hybrid approach which combines extractive and abstractive summarization. This has two benefits: first it allow us to use a very small vocabulary size, speeding up inference. Furthermore, we can create summaries which contain OOVs if they are present in the source text.", "Our decoder reads the vector representations of the input text using an attention mechanism, and generates the target summary word by word. We use an LSTM as our decoder, with a fixed-size vocabulary dictionary INLINEFORM0 and learnable word embeddings INLINEFORM1 . At time-step INLINEFORM2 the LSTM generates a summary word INLINEFORM3 by first computing the current hidden state INLINEFORM4 from the previous hidden state INLINEFORM5 , previous summary word INLINEFORM6 and current context vector INLINEFORM7 DISPLAYFORM0 ", "where the context vector INLINEFORM0 is computed with an attention mechanism on the encoder hidden states: DISPLAYFORM0 ", "The attention score INLINEFORM0 at time-step INLINEFORM1 on the INLINEFORM2 -th word is computed via a soft-max over INLINEFORM3 , where DISPLAYFORM0 ", " with INLINEFORM0 , INLINEFORM1 , INLINEFORM2 learnable parameters.", "A typical way to treat OOVs is to encode them with a single shared embedding. However, different OOVs can have very different meanings, and thus using a single embedding for all OOVs will confuse the model. This is particularly detrimental when using small vocabulary sizes. Here we address this issue by deriving the representations of OOVs from their corresponding context in the input text. Towards this goal, we change the update rule of INLINEFORM0 . In particular, if INLINEFORM1 belongs to a word that is in our decoder vocabulary we take its representation from the word embedding, otherwise if it appears in the input sentence as INLINEFORM2 we use DISPLAYFORM0 ", " where INLINEFORM0 and INLINEFORM1 are learnable parameters. Since INLINEFORM2 encodes useful context information of the source word INLINEFORM3 , INLINEFORM4 can be interpreted as the semantics of this word extracted from the input sentence. Furthermore, if INLINEFORM5 does not appear in the input text, nor in INLINEFORM6 , then we represent INLINEFORM7 using the INLINEFORM8 UNK INLINEFORM9 embedding.", "Given the current decoder's hidden state INLINEFORM0 , we can generate the target summary word INLINEFORM1 . As shown in fig:model:decoder, at each time step during decoding, the decoder outputs a distribution over generating words from INLINEFORM2 , as well as over copying a specific word INLINEFORM3 from the source sentence." ], [ "We jointly learn our encoder and decoder by maximizing the likelihood of decoding the correct word at each time step. We refer the reader to the experimental evaluation for more details." ], [ "In this section, we show results of abstractive summarization on Gigaword ( BIBREF25 , BIBREF26 ) and DUC2004 ( BIBREF15 ) datasets. 
Our model can learn a meaningful re-reading weight distribution for each word in the input text, putting more emphasis on important verb and nous, while ignoring common words such as prepositions. As for the decoder, we demonstrate that our copy mechanism can successfully reduce the typical vocabulary size by a factor 5 while achieving much better performance than the state-of-the-art, and by a factor of 30 while maintaining the same level of performance. In addition, we provide an analysis and examples of which words are copied during decoding." ], [ "Results on Gigaword: We compare the performances of different architectures and report ROUGE scores in Tab. TABREF32 . Our baselines include the ABS model of BIBREF2 with its proposed vocabulary size as well as an attention encoder-decoder model with uni-directional GRU encoder. We allow the decoder to generate variable length summaries. As shown in Tab. TABREF32 our Read-Again models outperform the baselines on all ROUGE scores, when using both 15K and 69K sized vocabularies. We also observe that adding the copy mechanism further helps to improve performance: Even though the decoder vocabulary size of our approach with copy (15K) is much smaller than ABS (69K) and GRU (69K), it achieves a higher ROUGE score. Besides, our Multiple-Sentences model achieves the best performance.", "Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark on summarization task consisting of 500 news articles. Each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to BIBREF2 , we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using beam-size of 10. As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses 15k decoder vocabulary, while previous methods use 69k or 200k.", "Importance Weight Visualization: As we described in the section before, INLINEFORM0 is a high-dimension vector representing the importance of each word INLINEFORM1 . While the importance of a word is different over each dimension, by averaging we can still look at general trends of which word is more relevant. fig:weightvisual depicts sample sentences with the importance weight INLINEFORM2 over input words. Words such as the, a, 's, have small INLINEFORM3 , while words such as aeronautics, resettled, impediments, which carry more information have higher values. This shows that our read-again technique indeed extracts useful information from the first reading to help bias the second reading results." ], [ "Table TABREF42 shows the effect on our model of decreasing the decoder vocabulary size. We can see that when using the copy mechanism, we are able to reduce the decoder vocabulary size from 69K to 2K, with only 2-3 points drop on ROUGE score. This contrasts the models that do not use the copy mechanism. This is possibly due to two reasons. First, when faced with OOVs during decoding time, our model can extract their meanings from the input text. Second, equipped with a copy mechanism, our model can generate OOVs as summary words, maintaining its expressive ability even with a small decoder vocabulary size. Tab. 
TABREF43 shows the decoding time as a function of vocabulary size. As computing the soft-max is usually the bottleneck for decoding, reducing vocabulary size dramatically reduces the decoding time from 0.38 second per sentence to 0.08 second.", "Tab. TABREF44 provides some examples of visualization of the copy mechanism. Note that we are able to copy key words from source sentences to improve the summary. From these examples we can see that our model is able to copy different types of rare words, such as special entities' names in case 1 and 2, rare nouns in case 3 and 4, adjectives in case 5 and 6, and even rare verbs in the last example. Note that in the third example, when the copy model's decoder uses the embedding of headmaster as its first input, which is extracted from the source sentence, it generates the same following sentence as the no-copy model. This probably means that the extracted embedding of headmaster is closely related to the learned embedding of teacher." ], [ "In this paper we have proposed two simple mechanisms to alleviate the problems of current encoder-decoder models. Our first contribution is a `Read-Again' model which does not form a representation of the input word until the whole sentence is read. Our second contribution is a copy mechanism that can handle out-of-vocabulary words in a principled manner allowing us to reduce the decoder vocabulary size and significantly speed up inference. We have demonstrated the effectiveness of our approach in the context of summarization and shown state-of-the-art performance. In the future, we plan to tackle summarization problems with large input text. We also plan to exploit our findings in other tasks such as machine translation." ] ], "section_name": [ "Introduction", "Summarization", "Neural Machine Translation", "Out-Of-Vocabulary and Copy Mechanism", "The read again model", "Encoder", "Decoder with copy mechanism", "Learning", "Experimental Evalaluation", "Quantitative Evaluation", "Decoder Vocabulary Size", "Conclusion" ] }
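The decoder described in this record combines attention over the encoder hidden states with a copy mechanism over source positions. Below is a minimal NumPy sketch of one such decoding step; the sigmoid copy gate, the parameter shapes, and the way the generation and copy distributions are mixed are illustrative assumptions rather than the authors' released implementation.

```python
# Minimal sketch of one decoding step with attention and a copy gate.
# Shapes, parameter names, and the sigmoid copy gate are illustrative
# assumptions; the record above does not specify the exact mixing rule.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

T, d_enc, d_dec, V = 6, 8, 8, 20           # source length, dims, decoder vocab size
H_enc = rng.normal(size=(T, d_enc))        # encoder hidden states for each source word
h_dec = rng.normal(size=(d_dec,))          # current decoder hidden state
src_ids = rng.integers(0, V + 5, size=T)   # source token ids (some may be OOV, id >= V)

# Attention: score each source position, normalise, build the context vector.
W_a = rng.normal(size=(d_dec, d_enc))
scores = H_enc @ (W_a.T @ h_dec)           # (T,)
alpha = softmax(scores)                    # attention weights over source words
context = alpha @ H_enc                    # (d_enc,)

# Generation distribution over the small decoder vocabulary.
W_gen = rng.normal(size=(V, d_dec + d_enc))
p_gen = softmax(W_gen @ np.concatenate([h_dec, context]))

# Copy gate (assumed sigmoid) decides how much mass goes to copying.
w_c = rng.normal(size=(d_dec + d_enc,))
p_copy = 1.0 / (1.0 + np.exp(-w_c @ np.concatenate([h_dec, context])))

# Final distribution over "vocabulary words + source positions":
# copying position j reuses the attention weight alpha[j].
p_final = np.concatenate([(1.0 - p_copy) * p_gen, p_copy * alpha])
best = int(np.argmax(p_final))
if best < V:
    print("generate vocab id", best)
else:
    print("copy source position", best - V, "token id", src_ids[best - V])
```

Because copy targets are source positions rather than vocabulary entries, an out-of-vocabulary source word can still surface in the summary, which is how the record motivates shrinking the decoder vocabulary from 69K down to 15K or even 2K entries.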
{ "answers": [ { "annotation_id": [ "58cd181cc6341f7e303e2d75cbda1c0bea6e2eec" ], "answer": [ { "evidence": [ "Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark on summarization task consisting of 500 news articles. Each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to BIBREF2 , we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using beam-size of 10. As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses 15k decoder vocabulary, while previous methods use 69k or 200k.", "FLOAT SELECTED: Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model." ], "extractive_spans": [], "free_form_answer": "w.r.t Rouge-1 their model outperforms by 0.98% and w.r.t Rouge-L their model outperforms by 0.45%", "highlighted_evidence": [ "As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2.", "FLOAT SELECTED: Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "43f2673b16701e2b8de8c2bd31cb3fbd81a71cf5" ], "answer": [ { "evidence": [ "Very recently, the success of deep neural networks in many natural language processing tasks ( BIBREF20 ) has inspired new work in abstractive summarization . BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. BIBREF3 build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, BIBREF4 extended BIBREF2 's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word condition on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step. In contrast, in this work we propose a `Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size resulting in much faster inference times. Furthermore our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally our experiments show state-of-the-art performance on the DUC competition." ], "extractive_spans": [], "free_form_answer": "neural attention model with a convolutional encoder with an RNN decoder and RNN encoder-decoder", "highlighted_evidence": [ "Very recently, the success of deep neural networks in many natural language processing tasks ( BIBREF20 ) has inspired new work in abstractive summarization . 
BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. ", "More recently, BIBREF4 extended BIBREF2 's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "By how much does their model outperform both the state-of-the-art systems?", "What is the state-of-the art?" ], "question_id": [ "9555aa8de322396a16a07a5423e6a79dcd76816a", "81e8d42dad08a58fe27eea838f060ec8f314465e" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: Read-Again Summarization Model", "Figure 2: Read-Again Model", "Figure 3: Hierachical Read-Again", "Table 1: Different Read-Again Model. Ours denotes Read-Again models. C denotes copy mechanism. Ours-Opt-1 and Ours-Opt-2 are the models described in section 3.1.3. Size denotes the size of decoder vocabulary in a model.", "Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model.", "Figure 4: Weight Visualization. Black indicates high weight", "Table 3: ROUGE Evaluation for Models with Different Decoder Size and 110k Encoder Size. Ours denotes Read-Again. C denotes copy mechanism.", "Table 4: ROUGE Evaluation for Models with Different Encoder Size and 15k Decoder Size. Ours denotes Read-Again. C denotes copy mechanism.", "Table 5: Decoding Time (s) per Sentence of Models with Different Decoder Size", "Table 6: Visualization of Copy Mechanism" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "7-Table1-1.png", "8-Table2-1.png", "8-Figure4-1.png", "9-Table3-1.png", "9-Table4-1.png", "9-Table5-1.png", "10-Table6-1.png" ] }
[ "By how much does their model outperform both the state-of-the-art systems?", "What is the state-of-the art?" ]
[ [ "1611.03382-Quantitative Evaluation-1", "1611.03382-8-Table2-1.png" ], [ "1611.03382-Summarization-2" ] ]
[ "w.r.t Rouge-1 their model outperforms by 0.98% and w.r.t Rouge-L their model outperforms by 0.45%", "neural attention model with a convolutional encoder with an RNN decoder and RNN encoder-decoder" ]
514
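The DUC2004 evaluation reported in the preceding record uses ROUGE limited-length recall with generated summaries capped at 75 characters. The sketch below shows a simplified ROUGE-N recall against multiple references under that cap; the official ROUGE toolkit adds stemming, jackknifing and other options that are omitted here.

```python
# Simplified ROUGE-N limited-length recall, in the spirit of the DUC2004
# protocol described above (candidate capped at 75 characters, recall
# pooled over multiple references). Not a drop-in replacement for the
# official ROUGE script.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, references, n=1, cap_chars=75):
    cand_grams = ngrams(candidate[:cap_chars].split(), n)
    total_overlap, total_ref = 0, 0
    for ref in references:
        ref_grams = ngrams(ref.split(), n)
        total_overlap += sum(min(c, ref_grams[g]) for g, c in cand_grams.items())
        total_ref += sum(ref_grams.values())
    return total_overlap / max(total_ref, 1)

refs = ["china attacks us trade barriers", "china criticises new us import duties"]
print(round(rouge_n_recall("china hits out at us trade barriers", refs, n=1), 3))
```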
1909.06937
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works. However, most existing models fail to fully utilize co-occurrence relations between slots and intents, which restricts their potential performance. To address this issue, in this paper we propose a novel Collaborative Memory Network (CM-Net) based on a well-designed block named the CM-block. The CM-block first captures slot-specific and intent-specific features from memories in a collaborative manner, and then uses these enriched features to enhance local context representations, based on which the sequential information flow leads to more specific (slot and intent) global utterance representations. Through stacking multiple CM-blocks, our CM-Net is able to alternately perform information exchange among the specific memories, local contexts and the global utterance, and thus to incrementally enrich each of them. We evaluate the CM-Net on two standard benchmarks (ATIS and SNIPS) and a self-collected corpus (CAIS). Experimental results show that the CM-Net achieves state-of-the-art results on the ATIS and SNIPS in most criteria, and significantly outperforms the baseline models on the CAIS. Additionally, we make the CAIS dataset publicly available for the research community.
{ "paragraphs": [ [ "Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred as intent detection and slot filling, respectively. Past years have witnessed rapid developments in diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of supervised signals of slots and intents, and share knowledge between them, most of existing works apply joint models that mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, and asynchronous bi-model BIBREF6. Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into a utterance-level representation for the intent prediction, without interactions between representations of slots and intents.", "Intuitively, slots and intents from similar fields tend to occur simultaneously, which can be observed from Figure FIGREF2 and Table TABREF3. Therefore, it is beneficial to generate the representations of slots and intents with the guidance from each other. Some works explore enhancing the slot filling task unidirectionally with the guidance from intent representations via gating mechanisms BIBREF7, BIBREF8, while the predictions of intents lack the guidance from slots. Moreover, the capsule network with dynamic routing algorithms BIBREF9 is proposed to perform interactions in both directions. However, there are still two limitations in this model. The one is that the information flows from words to slots, slots to intents and intents to words in a pipeline manner, which is to some extent limited in capturing complicated correlations among words, slots and intents. The other is that the local context information which has been shown highly useful for the slot filling BIBREF10, is not explicitly modeled.", "In this paper, we try to address these issues, and thus propose a novel $\\mathbf {C}$ollaborative $\\mathbf {M}$emory $\\mathbf {N}$etwork, named CM-Net. The main idea is to directly capture semantic relationships among words, slots and intents, which is conducted simultaneously at each word position in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features referred from memories, local context representations and global sequential information via the well-designed block, named CM-block, which consists of three computational components:", "Deliberate Attention: Obtaining slot-specific and intent-specific representations from memories in a collaborative manner.", "Local Calculation: Updating local context representations with the guidances of the referred slot and intent representations in the previous Deliberate Attention.", "", "Global Recurrence: Generating specific (slot and intent) global sequential representations based on local context representations from the previous Local Calculation.", "Above components in each CM-block are conducted consecutively, which are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together, and construct our CM-Net. We firstly conduct experiments on two popular benchmarks, SNIPS BIBREF11 and ATIS BIBREF12, BIBREF13. Experimental results show that the CM-Net achieves the state-of-the-art results in 3 of 4 criteria (e.g., intent detection accuracy on ATIS) on both benchmarks. 
Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net.", "Our main contributions are as follows:", "We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks.", "Our CM-Net achieves the state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most of criteria.", "We contribute a new corpus CAIS with manual annotations of slot tags and intent labels to the research community." ], [ "In principle, the slot filling is treated as a sequence labeling task, and the intent detection is a classification problem. Formally, given an utterance $X = \\lbrace x_1, x_2, \\cdots , x_N \\rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \\lbrace y_1, y_2, \\cdots , y_N \\rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\\theta } : X \\rightarrow Y $ from input words to slot tags. For the intent detection, it is designed to predict the intent label $\\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$.", "Typically, the input utterance is firstly encoded into a sequence of distributed representations $\\mathbf {X} = \\lbrace \\mathbf {x}_1, \\mathbf {x}_2, \\cdots , \\mathbf {x}_N\\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, the following bidirectional RNNs are applied to encode the embeddings $\\mathbf {X}$ into context-sensitive representations $\\mathbf {H} = \\lbrace \\mathbf {h}_1, \\mathbf {h}_2, \\cdots , \\mathbf {h}_N\\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate conditional probabilities of slot tags:", "Here $\\mathbf {Y}_x$ is the set of all possible sequences of tags, and $F(\\cdot )$ is the score function calculated by:", "where $\\mathbf {A}$ is the transition matrix that $\\mathbf {A}_{i,j}$ indicates the score of a transition from $i$ to $j$, and $\\mathbf {P}$ is the score matrix output by RNNs. $P_{i,j}$ indicates the score of the $j^{th}$ tag of the $i^{th}$ word in a sentence BIBREF15.", "When testing, the Viterbi algorithm BIBREF16 is used to search the sequence of slot tags with maximum score:", "As to the prediction of intent, the word-level hidden states $\\mathbf {H}$ are firstly summarized into a utterance-level representation $\\mathbf {v}^{int}$ via mean pooling (or max pooling or self-attention, etc.):", "The most probable intent label $\\hat{y}^{int}$ is predicted by softmax normalization over the intent label set:", "Generally, both tasks are trained jointly to minimize the sum of cross entropy from each individual task. Formally, the loss function of the join model is computed as follows:", "where $y^{int}_i$ and $y^{slot}_{i,j}$ are golden labels, and $\\lambda $ is hyperparameter, and $|S^{int}|$ is the size of intent label set, and similarly for $|S^{slot}|$ ." ], [ "In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is firstly encoded with the Embedding Layer, and then is transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally make predictions in the Inference Layer." 
], [ "The pre-trained word embeddings has been indicated as a de-facto standard of neural network architectures for various NLP tasks. We adapt the cased, 300d Glove BIBREF17 to initialize word embeddings, and keep them frozen." ], [ "It has been demonstrated that character level information (e.g. capitalization and prefix) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings." ], [ "The CM-block is the core module of our CM-Net, which is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence respectively." ], [ "To fully model semantic relations between slots and intents, we build the slot memory $\\mathbf {M^{slot}} $ and intent memory $\\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. For the slot memory, it keeps $|S^{slot}|$ slot cells which are randomly initialized and updated as model parameters. Similarly for the intent memory. At each word position, we take the hidden state $\\mathbf {h}_t$ as query, and obtain slot feature $\\mathbf {h}_t^{slot}$ and intent feature $\\mathbf {h}_t^{int}$ from both memories by the deliberate attention mechanism, which will be illustrated in the following.", "Specifically for the slot feature $\\mathbf {h}_t^{slot}$, we firstly get a rough intent representation $\\widetilde{\\mathbf {h}}_t^{int}$ by the word-aware attention with hidden state $\\mathbf {h}_t$ over the intent memory $\\mathbf {M^{int}}$, and then obtain the final slot feature $\\mathbf {h}_t^{slot}$ by the intent-aware attention over the slot memory $\\mathbf {M^{slot}}$ with the intent-enhanced representation $[\\mathbf {h}_t;\\widetilde{\\mathbf {h}}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows:", "where $ATT(\\cdot )$ is the query function calculated by the weighted sum of all cells $\\mathbf {m}_i^{x}$ in memory $\\mathbf {M}^{x}$ ($x \\in \\lbrace slot, int\\rbrace $) :", "Here $\\mathbf {u}$ and $\\mathbf {W}$ are model parameters. We name the above calculations of two-round attentions (Equation DISPLAY_FORM23) as “deliberate attention\".", "The intent representation $\\mathbf {h}_t^{int}$ is computed by the deliberate attention as well:", "These two deliberate attentions are conducted simultaneously at each word position in such collaborative manner, which guarantees adequate knowledge diffusions between slots and intents. The retrieved slot features $\\mathbf {H}_t^{slot}$ and intent features $\\mathbf {H}_t^{int}$ are utilized to provide guidances for the next local calculation layer." ], [ "Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 SLSTM2018 propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown more powerful for text representation when compared with the conventional BiLSTMs. We extend the S-LSTM with slot-specific features $\\mathbf {H}_t^{slot}$ and intent-specific features $\\mathbf {H}_t^{slot}$ retrieved from memories.", "Specifically, at each input position $t$, we take the local window context $\\mathbf {\\xi }_t$, word embedding $\\mathbf {x}_t$, slot feature $\\mathbf {h}_t^{slot}$ and intent feature $\\mathbf {h}_t^{int}$ as inputs to conduct combinatorial calculation simultaneously. 
Formally, in the $l^{th}$ layer, the hidden state $\\mathbf {h_t}$ is updated as follows:", "where $\\mathbf { \\xi } _ { t } ^ { l }$ is the concatenation of hidden states in a local window, and $\\mathbf {i}_t^l$, $\\mathbf {f}_t^l$, $\\mathbf {o}_t^l$, $\\mathbf {l}_t^l$ and $\\mathbf {r}_t^l$ are gates to control information flows, and $\\mathbf {W}_n^x$ $(x \\in \\lbrace i, o, f, l, r, u\\rbrace , n \\in \\lbrace 1, 2, 3, 4\\rbrace )$ are model parameters. More details about the state transition can be referred in BIBREF21. In the first CM-block, the hidden state $\\mathbf {h}_t$ is initialized with the corresponding word embedding. In other CM-blocks, the $\\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block.", "At each word position of above procedures, the hidden state is updated with abundant information from different perspectives, namely word embeddings, local contexts, slots and intents representations. The local calculation layer in each CM-block has been shown highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section SECREF46." ], [ "Bi-directional RNNs, especially the BiLSTMs BIBREF22 are regarded to encode both past and future information of a sentence, which have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. The inherent nature of BiLSTMs is able to supplement global sequential information, which is insufficiently modeled in the previous local calculation layer. Thus we apply an additional BiLSTMs layer upon the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, it takes the hidden state $\\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input, and conduct recurrent steps as follows:", "The output “states\" of the BiLSTMs are taken as “states\" input of the local calculation in next CM-block. The global sequential information encoded by the BiLSTMs is shown necessary and effective for both tasks in our experiments in Section SECREF46." ], [ "After multiple rounds of interactions among local context representations, global sequential information, slot and intent features, we conduct predictions upon the final CM-block. For the predictions of slots, we take the hidden states $\\mathbf {H}$ along with the retrieved slot $\\mathbf {H}^{slot}$ representations (both are from the final CM-block) as input features, and then conduct predictions of slots similarly with the Equation (DISPLAY_FORM12) in Section SECREF2:", "For the prediction of intent label, we firstly aggregate the hidden state $\\mathbf {h}_t$ and the retrieved intent representation $\\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling:", "and then take the summarized vector $\\mathbf {v}^{int}$ as input feature to conduct prediction of intent consistently with the Equation (DISPLAY_FORM14) in Section SECREF2." ], [ "We evaluate our proposed CM-Net on three real-word datasets, and statistics are listed in Table TABREF32." ], [ "The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark for the SLU research. Please note that, there are extra named entity features in the ATIS, which almost determine slot tags. 
These hand-crafted features are not generally available in open domains BIBREF25, BIBREF29, therefore we train our model purely on the training set without additional hand-crafted features." ], [ "SNIPS Natural Language Understanding benchmark BIBREF11 is collected in a crowsourced fashion by Snips. The intents of this dataset are more balanced when compared with the ATIS. We split another 700 utterances for validation set following previous works BIBREF7, BIBREF9." ], [ "We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field." ], [ "Slot filling is typically treated as a sequence labeling problem, and thus we take the conlleval as the token-level $F_1$ metric. The intent detection is evaluated with the classification accuracy. Specially, several utterances in the ATIS are tagged with more than one labels. Following previous works BIBREF13, BIBREF25, we count an utterrance as a correct classification if any ground truth label is predicted." ], [ "All trainable parameters in our model are initialized by the method described in BIBREF31 Xavier. We apply dropout BIBREF32 to the embedding layer and hidden states with a rate of 0.5. All models are optimized by the Adam optimizer BIBREF33 with gradient clipping of 3 BIBREF34. The initial learning rate $\\alpha $ is set to 0.001, and decrease with the growth of training steps. We monitor the training process on the validation set and report the final result on the test set. One layer CNN with a filter of size 3 and max pooling are utilized to generate 100d word embeddings. The cased 300d Glove is adapted to initialize word embeddings, and kept fixed when training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be taken as introducing supervised signals into the memories to some extent. We conduct hyper-parameters tuning for layer size (finally set to 3) and loss weight $\\lambda $ (finally set to 0.5), and empirically set other parameters to the values listed in the supplementary material." ], [ "Main results of our CM-Net on the SNIPS and ATIS are shown in Table TABREF33. Our CM-Net achieves the state-of-the-art results on both datasets in terms of slot filling $F_1$ score and intent detection accuracy, except for the $F_1$ score on the ATIS. We conjecture that the named entity feature in the ATIS has a great impact on the slot filling result as illustrated in Section SECREF34. Since the SNIPS is collected from multiple domains with more balanced labels when compared with the ATIS, the slot filling $F_1$ score on the SNIPS is able to demonstrate the superiority of our CM-Net.", "It is noteworthy that the CM-Net achieves comparable results when compared with models that exploit additional language models BIBREF27, BIBREF28. 
We conduct auxiliary experiments by leveraging the well-known BERT BIBREF35 as an external resource for a relatively fair comparison with those models, and report details in Section SECREF48." ], [ "Since the SNIPS corpus is collected from multiple domains and its label distributions are more balanced when compared with the ATIS, we choose the SNIPS to elucidate properties of our CM-Net and conduct several additional experiments." ], [ "In the CM-Net, the deliberate attention mechanism is proposed in a collaborative manner to perform information exchange between slots and intents. We conduct experiments to verify whether such kind of knowledge diffusion in both memories can promote each other. More specifically, we remove one unidirectional diffusion (e.g. from slot to intent) or both in each experimental setup. The results are illustrated in Figure FIGREF43.", "We can observe obvious drops on both tasks when both directional knowledge diffusions are removed (CM-Net vs. neither). For the slot filling task (left part in Figure FIGREF43), the $F_1$ scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. “no slot2int\"), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. “no int2slot\"). Similar observations can be found for the intent detection task (right part in Figure FIGREF43).", "In conclusion, the bidirectional knowledge diffusion between slots and intents are necessary and effective to promote each other." ], [ "We conduct ablation experiments to investigate the impacts of various components in our CM-Net. In particular, we remove one component among slot memory, intent memory, local calculation and global recurrence. Results of different combinations are presented in Table TABREF44.", "Once the slot memory and its corresponding interactions with other components are removed, scores on both tasks decrease to some extent, and a more obvious decline occurs for the slot filling (row 1 vs. row 0), which is consistent with the conclusion of Section SECREF45. Similar observations can be found for the intent memory (row 2). The local calculation layer is designed to capture better local context representations, which has an evident impact on the slot filling and slighter effect on the intent detection (row 3 vs. row 0). Opposite observations occur in term of global recurrence, which is supposed to model global sequential information and thus has larger effect on the intent detection (row 4 vs. row 0)." ], [ "Recently, there has been a growing body of works exploring neural language models that trained on massive corpora to learn contextual representations (e.g. BERT BERT and EMLo EMLo). Inspired by the effectiveness of language model embeddings, we conduct experiments by leveraging the BERT as an additional feature. The results emerged in Table TABREF47 show that we establish new state-of-the-art results on both tasks of the SNIPS." ], [ "We conduct experiments on our self-collected CAIS to evaluate the generalizability in different language. We apply two baseline models for comparison, one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling task, and the other one is the more powerful sententce-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages." 
], [ "Memory network is a general machine learning framework introduced by BIBREF37 memory2014, which have been shown effective in question answering BIBREF37, BIBREF38, machine translation BIBREF39, BIBREF40, aspect level sentiment classification BIBREF41, etc. For spoken language understanding, BIBREF42 memoryslu2016 introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between slots and the intent in a given utterance, and devise a novel collaborative retrieval approach." ], [ "Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms BIBREF7, BIBREF8. Intuitively, the slot representations are also instructive to the intent detection task and thus bidirectional interactions between slots and intents are benefical for each other. BIBREF9 capsule2018 propose a hierarchical capsule network to perform interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchanges are performed simultaneously with knowledge diffusions in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents." ], [ "BIBREF21 BIBREF21 propose a novel graph RNN named S-LSTM, which models sentence between words simultaneously. Inspired by the new perspective of state transition in the S-LSTM, we further extend it with task-specific (i.e., slots and intents) representations via our collaborative memories. In addition, the global information in S-LSTM is modeled by aggregating the local features with gating mechanisms, which may lose sight of sequential information of the whole sentence. Therefore, We apply external BiLSTMs to supply global sequential features, which is shown highly necessary for both tasks in our experiments." ], [ "We propose a novel $\\mathbf {C}$ollaborative $\\mathbf {M}$emory $\\mathbf {N}$etwork (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and incrementally enrich the information flows with local context and global sequential information. Experiments on two standard benchmarks and our CAIS corpus demonstrate the effectiveness and generalizability of our proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community." ], [ "Liu, Chen and Xu are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), and the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions." 
] ], "section_name": [ "Introduction", "Background", "CM-Net ::: Overview", "CM-Net ::: Embedding Layers ::: Pre-trained Word Embedding", "CM-Net ::: Embedding Layers ::: Character-aware Word Embedding", "CM-Net ::: CM-block", "CM-Net ::: CM-block ::: Deliberate Attention", "CM-Net ::: CM-block ::: Local Calculation", "CM-Net ::: CM-block ::: Global Recurrence", "CM-Net ::: Inference Layer", "Experiments ::: Datasets and Metrics", "Experiments ::: Datasets and Metrics ::: ATIS", "Experiments ::: Datasets and Metrics ::: SNIPS", "Experiments ::: Datasets and Metrics ::: CAIS", "Experiments ::: Datasets and Metrics ::: Metrics", "Experiments ::: Implementation Details", "Experiments ::: Main Results", "Analysis", "Analysis ::: Whether Memories Promote Each Other?", "Analysis ::: Ablation Experiments", "Analysis ::: Effects of Pre-trained Language Models", "Analysis ::: Evaluation on the CAIS", "Related Work ::: Memory Network", "Related Work ::: Interactions between slots and intents", "Related Work ::: Sentence-State LSTM", "Conclusion", "Acknowledgments" ] }
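The CM-block described in the record above retrieves slot and intent features through a two-round "deliberate attention" over two memories. Below is a small NumPy sketch of the slot-side retrieval; the additive scoring function, the memory sizes and all dimensions are assumptions for illustration, since the record only specifies the attention as a weighted sum over memory cells.

```python
# Sketch of the two-round "deliberate attention" from the CM-block above:
# the hidden state first queries the intent memory, and the intent-enhanced
# query then attends over the slot memory. The additive scoring form and
# all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory, W, u):
    # score_i = u . tanh(W [query; m_i]); returns the weighted sum of memory cells
    scores = np.array([u @ np.tanh(W @ np.concatenate([query, m])) for m in memory])
    return softmax(scores) @ memory

d, d_q = 16, 16
n_slots, n_intents = 10, 5
M_slot = rng.normal(size=(n_slots, d))     # one cell per slot tag
M_int = rng.normal(size=(n_intents, d))    # one cell per intent label
h_t = rng.normal(size=(d_q,))              # hidden state of word t

# Round 1: rough intent summary from the plain hidden state.
W1, u1 = rng.normal(size=(d, d_q + d)), rng.normal(size=(d,))
rough_int = attend(h_t, M_int, W1, u1)

# Round 2: slot feature retrieved with the intent-enhanced query [h_t; rough_int].
W2, u2 = rng.normal(size=(d, d_q + 2 * d)), rng.normal(size=(d,))
h_slot = attend(np.concatenate([h_t, rough_int]), M_slot, W2, u2)

print(h_slot.shape)  # (16,): slot-specific feature for word t
```

The intent feature is retrieved symmetrically, with a rough slot summary folded into the query before attending over the intent memory, which is what lets the two memories guide each other at every word position.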
{ "answers": [ { "annotation_id": [ "4420a87163b36ed7a76ec7e62953a92d0b55e147" ], "answer": [ { "evidence": [ "We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field." ], "extractive_spans": [ "speaker systems in the real world" ], "free_form_answer": "", "highlighted_evidence": [ "We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "c83e4e9a57db3787ab8fcb4a34d82a396e64f809" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 6: Results on our CAIS dataset, where “†” indicates our implementation of the S-LSTM." ], "extractive_spans": [], "free_form_answer": "F1 scores of 86.16 on slot filling and 94.56 on intent detection", "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Results on our CAIS dataset, where “†” indicates our implementation of the S-LSTM." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "bdeeb95944c9ce462886e9937eae4b662dbba3b7" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Dataset statistics." ], "extractive_spans": [], "free_form_answer": "10,001 utterances", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Dataset statistics." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "a3a2a87391e67e50616d49f4c90dfd19e51911c1" ], "answer": [ { "evidence": [ "We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field." 
], "extractive_spans": [ "the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS)" ], "free_form_answer": "", "highlighted_evidence": [ "We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "7fe70c414b69eaf1b8f915f547a59a302aa695c1" ], "answer": [ { "evidence": [ "We conduct experiments on our self-collected CAIS to evaluate the generalizability in different language. We apply two baseline models for comparison, one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling task, and the other one is the more powerful sententce-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages." ], "extractive_spans": [ "BiLSTMs + CRF architecture BIBREF36", "sententce-state LSTM BIBREF21" ], "free_form_answer": "", "highlighted_evidence": [ "We conduct experiments on our self-collected CAIS to evaluate the generalizability in different language. We apply two baseline models for comparison, one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling task, and the other one is the more powerful sententce-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "What is the domain of their collected corpus?", "What was the performance on the self-collected corpus?", "What is the size of their dataset?", "What is the source of the CAIS dataset?", "What were the baselines models?" ], "question_id": [ "b4f5bf3b7b37e2f22d13b724ca8fe7d0888e04a2", "fa3312ae4bbed11a5bebd77caf15d651962e0b26", "26c290584c97e22b25035f5458625944db181552", "d71772bfbc27ff1682e552484bc7c71818be50cf", "b6858c505936d981747962eae755a81489f62858" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Figure 1: Statistical association of slot tags (on the left) and intent labels (on the right) in the SNIPS, where colors indicate different intents and thicknesses of lines indicate proportions.", "Table 1: Examples in SNIPS with annotations of intent label for the utterance and slot tags for partial words.", "Figure 2: Overview of our proposed CM-Net. The input utterance is firstly encoded with the Embedding Layer (bottom), and then is transformed by multiple CM-blocks with the assistance of both slot and intent memories (on both sides). Finally we make predictions of slots and the intent in the Inference Layer (top).", "Figure 3: The internal structure of our CM-Block, which is composed of deliberate attention, local calculation and global recurrent respectively.", "Table 2: Dataset statistics.", "Table 3: Results on test sets of the SNIPS and ATIS, where our CM-Net achieves state-of-the-art performances in most cases. “*” indicates that results are retrieved from Slot-Gated (Goo et al., 2018), and “†” indicates our implementation.", "Figure 4: Investigations of the collaborative retrieval approach on slot filling (on the left) and intent detection (on the right), where “no slot2int” indicates removing slow-aware attention for the intent representation, and similarly for “no int2slot” and “neither”.", "Table 4: Ablation experiments on the SNIPS to investigate the impacts of various components, where “- slot memory” indicates removing the slot memory and its interactions with other components correspondingly. Similarly for the other options.", "Table 5: Results on the SNIPS benchmark with the assistance of pre-trained language model, where we establish new state-of-the-art results on the SNIPS.", "Table 6: Results on our CAIS dataset, where “†” indicates our implementation of the S-LSTM." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Table2-1.png", "6-Table3-1.png", "7-Figure4-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png" ] }
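The embedding layer of the same record uses a one-layer character CNN with filter width 3 and max pooling to produce 100-dimensional character-aware word embeddings. A minimal sketch of that component follows; the ReLU non-linearity, the padding scheme and the character vocabulary size are assumptions not stated in the record.

```python
# Character-aware word embedding as described in the CM-Net record above:
# a single convolution over character embeddings followed by max pooling.
# Filter width 3 and the 100-d output match the reported implementation
# details; the ReLU, padding and character vocabulary are assumptions.
import numpy as np

rng = np.random.default_rng(4)

n_chars, d_char, d_out, width = 60, 30, 100, 3
E_char = rng.normal(size=(n_chars, d_char))            # character embedding table
W = rng.normal(size=(d_out, width * d_char)) * 0.1     # convolution filters of width 3
b = np.zeros(d_out)

def char_cnn_embedding(char_ids):
    chars = E_char[char_ids]                            # (word_len, d_char)
    # pad so even one- or two-character words produce at least one window
    pad = np.zeros((width - 1, d_char))
    chars = np.vstack([pad, chars, pad])
    windows = np.stack([chars[i:i + width].ravel()
                        for i in range(len(chars) - width + 1)])
    conv = np.maximum(windows @ W.T + b, 0.0)           # ReLU feature maps
    return conv.max(axis=0)                             # max pool over positions

word = rng.integers(0, n_chars, size=5)                 # ids of a 5-character word
print(char_cnn_embedding(word).shape)                   # (100,)
```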
[ "What was the performance on the self-collected corpus?", "What is the size of their dataset?" ]
[ [ "1909.06937-8-Table6-1.png" ], [ "1909.06937-5-Table2-1.png" ] ]
[ "F1 scores of 86.16 on slot filling and 94.56 on intent detection", "10,001 utterances" ]
516
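The background section of the CM-Net record scores a tag sequence with per-word emission scores P and a tag-transition matrix A, and decodes the highest-scoring sequence with the Viterbi algorithm at test time. The sketch below implements that decoding step over placeholder scores; the tag set size and the score values are illustrative only.

```python
# Viterbi decoding over CRF scores, as used for slot filling in the
# CM-Net record above: emissions P (one row per word, one column per tag)
# plus a tag-transition matrix A. Scores here are random placeholders.
import numpy as np

rng = np.random.default_rng(2)

def viterbi(P, A):
    n_words, n_tags = P.shape
    dp = np.full((n_words, n_tags), -np.inf)
    back = np.zeros((n_words, n_tags), dtype=int)
    dp[0] = P[0]
    for t in range(1, n_words):
        # dp[t-1][i] + A[i][j] + P[t][j], maximised over the previous tag i
        cand = dp[t - 1][:, None] + A + P[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0)
    tags = [int(dp[-1].argmax())]
    for t in range(n_words - 1, 0, -1):
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]

n_words, n_tags = 7, 5                   # e.g. BIO-style tags such as O, B-song, I-song
P = rng.normal(size=(n_words, n_tags))   # emission scores from the encoder
A = rng.normal(size=(n_tags, n_tags))    # transition scores A[i, j]: tag i to tag j
print(viterbi(P, A))                     # highest-scoring tag sequence
```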
1905.10044
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, and require difficult entailment-like inference to solve. We also explore the effectiveness of a range of transfer learning baselines. We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on our train set. It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.
{ "paragraphs": [ [ "Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing.\" implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on. Work completed while interning at Google. Also affiliated with Columbia University, work done at Google.", "To test a model's ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests BIBREF0 , BIBREF1 , BIBREF2 that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning.", "In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions. That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them. As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions.", "Yes/No questions do appear as a subset of some existing datasets BIBREF3 , BIBREF4 , BIBREF5 . However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions.", "We follow the data collection method used by Natural Questions (NQ) BIBREF6 to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes\" or “no\" as output. Figure contains some examples, and Appendix SECREF17 contains additional randomly selected examples.", "Following recent work BIBREF7 , we focus on using transfer learning to establish baselines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing. Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT BIBREF8 or ELMo BIBREF9 . We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets.", "We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results. 
Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq." ], [ "Yes/No questions make up a subset of the reading comprehension datasets CoQA BIBREF3 , QuAC BIBREF4 , and HotPotQA BIBREF5 , and are present in the ShARC BIBREF10 dataset. These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions. However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes\" answers.", "The MS Marco dataset BIBREF11 , which contains questions with free-form text answers, also includes some yes/no questions. We experiment with heuristically identifying them in Section SECREF4 , but this process can be noisy and the quality of the resulting annotations is unknown. We also found the resulting dataset is class imbalanced, with 80% “yes\" answers.", "Yes/No QA has been used in other contexts, such as the templated bAbI stories BIBREF12 or some Visual QA datasets BIBREF13 , BIBREF14 . We focus on answering yes/no questions using natural language text.", "Question answering for reading comprehension in general has seen a great deal of recent work BIBREF15 , BIBREF16 , and there have been many recent attempts to construct QA datasets that require advanced reasoning abilities BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions BIBREF5 , BIBREF18 , or filtering out easy questions BIBREF19 . This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting. In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources.", "Natural language inference is also a well studied area of research, particularly on the MultiNLI BIBREF21 and SNLI BIBREF22 datasets. Other sources of entailment data include the PASCAL RTE challenges BIBREF23 , BIBREF24 or SciTail BIBREF25 . We note that, although SciTail, RTE-6 and RTE-7 did not use crowd workers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text. Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task. BoolQ also requires detecting entailment in paragraphs instead of sentence pairs.", "Transfer learning for entailment has been studied in GLUE BIBREF7 and SentEval BIBREF26 . 
Unsupervised pre-training in general has recently shown excellent results on many datasets, including entailment data BIBREF9 , BIBREF8 , BIBREF27 .", "Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works BIBREF28 , BIBREF29 , BIBREF25 . In this paper we found some evidence suggesting that these approaches are less effective than using crowd-sourced entailment examples when it comes to transferring to natural yes/no questions.", "Contemporaneously with our work, BIBREF30 showed that pre-training on supervised tasks could be beneficial even when using pre-trained language models, especially for a textual entailment task. Our work confirms these results for yes/no question answering.", "This work builds upon the Natural Questions (NQ) BIBREF6 , which contains some natural yes/no questions. However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task. In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset." ], [ "An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either “yes\" or “no\". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of them." ], [ "We gather data using the pipeline from NQ BIBREF6 , but with an additional filtering step to focus on yes/no questions. We summarize the complete pipeline here, but refer to their paper for a more detailed description.", "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.", "Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.", "Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes\" or “no\". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.", "Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document. 
This helps reduce ambiguity (ex., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset.", "We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. “Yes” answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens)." ], [ "In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them." ], [ "First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples. If there was a disagreement, the authors conferred and selected a single answer by mutual agreement. We call the resulting labels “gold-standard\" labels. On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels. Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage. Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset." ], [ "Part of the value of this dataset is that it contains questions that people genuinely want to answer. To explore this further, we manually define a set of topics that questions can be about. An author categorized 200 questions into these topics. The results can be found in the upper half of Table .", "Questions were often about entertainment media (including T.V., movies, and music), along with other popular topics like sports. However, there are still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world.", "We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table . Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional). The questions that do not fall into these three categories were split between requesting facts about a specific entity, or requesting more general factual information.", "We do find a correlation between the nature of the question and the likelihood of a “yes\" answer. However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class. We also found that question-only models perform very poorly on this task (see Section SECREF12 ), which helps confirm that the questions do not contain sufficient information to predict the answer on their own." ], [ "Finally, we categorize the kinds of inference required to answer the questions in BoolQ. The definitions and results are shown in Table .", "Less than 40% of the examples can be solved by detecting paraphrases. 
Instead, many questions require making additional inferences (categories “Factual Reasoning\", “By Example\", and “Other Inference\") to connect what is stated in the passage to the question. There is also a significant class of questions (categories “Implicit\" and “Missing Mention\") that require a subtler kind of inference based on how the passage is written." ], [ "Why do natural yes/no questions require inference so often? We hypothesize that there are several factors. First, we notice factoid questions that ask about simple properties of entities, such as “Was Obama born in 1962?\", are rare. We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., “When was Obama born?\"). Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information.", "Second, both the passages and questions rarely include negation. As a result, detecting a “no\" answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question. This requires reasoning that goes beyond paraphrasing (see the “Other-Inference\" or “Implicit\" examples).", "We also think it was important that annotators only had to answer questions, rather than generate them. For example, imagine trying to construct questions that fall into the categories of “Missing Mention\" or “Implicit\". While possible, it would require a great deal of thought and creativity. On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive. Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels.", "A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or Y/N MS Marco, were not very useful for transfer. The entailment datasets were stronger despite consisting of sentence pairs. This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer. Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis.", "The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it BIBREF37 , particularly related to “annotation artifacts\" caused by using crowd workers to write the hypothesis statements BIBREF0 . We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data.", "We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions. The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text. Indeed, we found that our two step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training.", "Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup. We leave these experiments to future work. 
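A minimal sketch of the class ablation referred to above, dropping MultiNLI's contradiction examples before pre-training, is shown below; it assumes the standard JSONL release of MultiNLI with a "gold_label" field, which may differ from the files actually used.

```python
# Sketch of ablating the "contradiction" class from MultiNLI before
# using it for pre-training. Assumes the standard JSONL release with a
# "gold_label" field; the actual preprocessing code may differ.
import json

def load_multinli_without_contradiction(path):
    examples = []
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            if ex.get("gold_label") in ("entailment", "neutral"):
                examples.append(ex)
    return examples
```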
Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions." ], [ "Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm BIBREF7 . We find training models on our train set alone to be relatively ineffective. Our best model reaches 69.6% accuracy, only 8% better than the majority baseline. Therefore, we follow the recent trend in NLP of using transfer learning. In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data. We list the sources we consider for pre-training below.", "Entailment: We consider two entailment datasets, MultiNLI BIBREF21 and SNLI BIBREF22 . We choose these datasets since they are widely-used and large enough to use for pre-training. We also experiment with ablating classes from MultiNLI. During fine-tuning we use the probability the model assigns to the “entailment\" class as the probability of predicting a “yes\" answer.", "Multiple-Choice QA: We use a multiple choice reading comprehension dataset, RACE BIBREF31 , which contains stories or short essays paired with questions built to test the reader's comprehension of the text. Following what was done in SciTail BIBREF25 , we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question. During training, we have models independently assign a score to each statement, and then apply the softmax operator between all statements per each question to get statement probabilities. We use the negative log probability of the correct statement as a loss function. To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a “yes\" answer.", "Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage. Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data.", "First, we use the QNLI task from GLUE BIBREF7 , where the model must determine if a sentence from SQuAD 1.1 BIBREF15 contains the answer to an input question or not. Following previous work BIBREF32 , we also try building entailment-like training data from SQuAD 2.0 BIBREF33 . We concatenate questions with either the correct answer, or with the incorrect “distractor\" answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text.", "Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document. Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph. We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question. 
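The scoring-plus-softmax recipe used above for RACE statements and for NQ long-answer paragraphs, and the sigmoid applied at BoolQ fine-tuning time, can be summarized in a short sketch; `score_fn` is a placeholder for the underlying model and is not the authors' actual code.

```python
# Listwise training objective sketched above: score every candidate for a
# question, apply softmax over the candidates, and minimise the negative
# log probability of the correct one. At BoolQ fine-tuning time the same
# scorer is reused with a sigmoid. `score_fn` is a placeholder model that
# maps (question, candidate) to a scalar tensor.
import torch
import torch.nn.functional as F

def listwise_loss(score_fn, question, candidates, correct_index):
    scores = torch.stack([score_fn(question, c) for c in candidates])  # (n,)
    log_probs = F.log_softmax(scores, dim=0)
    return -log_probs[correct_index]

def boolq_yes_probability(score_fn, question, passage):
    return torch.sigmoid(score_fn(question, passage))
```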
On BoolQ, we compute the probability of a “yes\" answer by applying the sigmoid operator to the score the model gives to the input question and passage.", "Paraphrasing: We use the Quora Question Paraphrasing (QQP) dataset, which consists of pairs of questions labelled as being paraphrases or not. Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question.", "Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus BIBREF11 . MS Marco has free-form answers paired with snippets of related web documents. We search for answers starting with “yes\" or “no\", and then pair the corresponding questions with snippets marked as being related to the question. We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are “yes” answers.", "Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives BIBREF9 , BIBREF8 , BIBREF27 , can improve performance on many tasks. We experiment with these methods by using the pre-trained models from ELMo, BERT, and OpenAI's Generative Pre-trained Transformer (OpenAI GPT) (see Section SECREF11 )." ], [ "First, we experiment with using a linear classifier on our task. In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set). We did find there was a correlation between the number of times question words occurred in the passage and the answer being “yes\", but the correlation was not strong enough to build an effective classifier. “Yes\" is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases." ], [ "For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention. Our experiments using unsupervised pre-training use the models provided by the authors. In more detail:", "Our Recurrent model follows a standard recurrent plus attention architecture for text-pair classification BIBREF7 . It embeds the premise/hypothesis text using fasttext word vectors BIBREF34 and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention BIBREF35 to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class. See Appendix SECREF18 for details.", "Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors.", "Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 .", "Our BERTL model fine-tunes the 24 layer 1024 dimensional transformer from BIBREF8 , which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia.", "We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results. 
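Returning to the heuristic Y/N MS Marco construction described earlier in this subsection, a rough sketch of that filtering step is given below; the field names are assumptions about the MS Marco release, and the real construction may differ.

```python
# Sketch of the heuristic Y/N MS Marco construction described above:
# keep questions whose free-form answer starts with "yes" or "no", and
# pair them with the passages marked as relevant. Field names are
# assumptions, not verified against the actual MS Marco files.
def build_yes_no_marco(marco_examples):
    data = []
    for ex in marco_examples:
        for answer in ex.get("answers", []):
            first = answer.strip().lower().split()[0] if answer.strip() else ""
            if first in ("yes", "no"):
                for passage in ex["passages"]:
                    if passage.get("is_selected"):
                        data.append({"question": ex["query"],
                                     "passage": passage["passage_text"],
                                     "label": first})
                break
    return data
```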
We use a batch size of 24, learning rate of 1e-5, and 5 training epochs for BERT and a learning rate of 6.25e-5, batch size of 6, language model loss of 0.5, and 3 training epochs for OpenAI GPT." ], [ "Following the recommendation of BIBREF0 , we first experiment with models that are only allowed to observe the question or the passage. The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage. Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer. Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with “yes\" answers." ], [ "The results of our transfer learning methods are shown in Table . All results are averaged over five runs. For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated. For unsupervised pre-training, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning.", "QA Results: We were unable to transfer from RACE or SQuAD 2.0. For RACE, the problem might be domain mismatch. In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge.", "We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but its possible detecting the adversarially-constructed distractors used for negative examples does not relate well to yes/no QA.", "We got better results using QNLI, and even better results using NQ. This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline.", "Entailment Results: The MultiNLI dataset out-performed all other supervised methods by a large margin. Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies.", "Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value. SNLI transferred better than other datasets, but worse than MultiNLI. We suspect this is due to limitations of the photo-caption domain it was constructed from.", "Other Supervised Results: We obtained a small amount of transfer using QQP and Y/N MS Marco. Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness. The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ.", "Unsupervised Results: Following results on other datasets BIBREF7 , we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training." ], [ "Our best single-step transfer learning results were from using the pre-trained BERTL model and MultiNLI. We also experiment with combining these approaches using a two-step pre-training regime. In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set. 
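A compact sketch of this two-step procedure, using the optimization settings reported above for BERT, is given below; `finetune` is a placeholder for the actual training loop, and only the hyperparameter values are taken from the text.

```python
# Two-step transfer sketched above: pre-trained BERT-large is first
# fine-tuned on MultiNLI, then fine-tuned again on the BoolQ train set.
# `finetune` is a placeholder training function.
BERT_FINETUNE = {"batch_size": 24, "learning_rate": 1e-5, "epochs": 5}

def two_step_transfer(bert_large, multinli_data, boolq_train, finetune):
    model = finetune(bert_large, multinli_data, **BERT_FINETUNE)  # step 1
    model = finetune(model, boolq_train, **BERT_FINETUNE)         # step 2
    return model
```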
We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI.", "We show the test set results for this model, and some other pre-training variations, in Table . For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance.", "Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable. This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives." ], [ "In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI. Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples. For small numbers of examples, the recurrent model with MultiNLI pre-training actually out-performs BERTL." ], [ "We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions. We have shown these questions are challenging and require a wide range of inference abilities to solve. We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training. Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application.", "" ], [ "We include a number of randomly selected examples from the BoolQ train set in Figure FIGREF19 . For each example we show the question in bold, followed by the answer in parentheses, and then the passage below." ], [ "Our recurrent model is a standard model from the text pair classification literature, similar to the one used in the GLUE baseline BIBREF7 and the model from BIBREF38 . Our model has the following stages:", "Embed: Embed the words using a character CNN following what was done by BIBREF40 , and the fasttext crawl word embeddings BIBREF34 . Then run a BiLSTM over the results to get context-aware word hypothesis embeddings INLINEFORM0 and premise embeddings INLINEFORM1 .", "Co-Attention: Compute a co-attention matrix, INLINEFORM0 , between the hypothesis and premise where INLINEFORM1 , INLINEFORM2 is elementwise multiplication, and INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are weights to be learned.", "Attend: For each row in INLINEFORM0 , apply the softmax operator and use the results to compute a weighed sum of the hypothesis embeddings, resulting in attended vectors INLINEFORM1 . We use the transpose of INLINEFORM2 to compute vectors INLINEFORM3 from the premise embeddings in a similar manner.", "Pool: Run another BiLSTM over INLINEFORM0 to get embeddings INLINEFORM1 . Then pool these embeddings by computing attention scores INLINEFORM2 , INLINEFORM3 , and then the sum INLINEFORM4 = INLINEFORM5 . Likewise we compute INLINEFORM6 from the premise.", "Classify: Finally we feed INLINEFORM0 into a fully connected layer, and then through a softmax layer to predict the output class.", "We apply dropout at a rate of 0.2 between all layers, and train the model using the Adam optimizer BIBREF39 . The learning rate is decayed by 0.999 every 100 steps. We use 200 dimensional LSTMs and a 100 dimensional fully connected layer." 
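For readers who want a concrete picture of the recurrent model described in the appendix above, the following PyTorch sketch follows the listed stages; it starts from pre-computed word embeddings (omitting the character CNN and fasttext lookup), and the trilinear form of the co-attention is an assumption, since the exact formulas are not recoverable from the extracted text.

```python
# Compact sketch of the recurrent-plus-co-attention text-pair classifier
# outlined above. Inputs are pre-computed word embeddings; the trilinear
# attention a_ij = w1.h_i + w2.p_j + w3.(h_i * p_j) is an assumed form.
import torch
import torch.nn as nn

class CoAttentionClassifier(nn.Module):
    def __init__(self, emb_dim=300, hidden=200, fc_dim=100, n_classes=2, dropout=0.2):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.enc1 = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        d = 2 * hidden
        self.w1 = nn.Linear(d, 1, bias=False)
        self.w2 = nn.Linear(d, 1, bias=False)
        self.w3 = nn.Parameter(torch.randn(d) / d ** 0.5)
        self.enc2 = nn.LSTM(2 * d, hidden, bidirectional=True, batch_first=True)
        self.pool = nn.Linear(d, 1)
        self.out = nn.Sequential(nn.Linear(2 * d, fc_dim), nn.ReLU(),
                                 nn.Linear(fc_dim, n_classes))

    def attend(self, a, x):
        # softmax over the last axis of the attention matrix, weighted sum of x
        return torch.softmax(a, dim=-1) @ x

    def attentive_pool(self, x):
        weights = torch.softmax(self.pool(x).squeeze(-1), dim=-1)  # (B, T)
        return (weights.unsqueeze(-1) * x).sum(dim=1)              # (B, 2h)

    def forward(self, hyp_emb, prem_emb):
        h, _ = self.enc1(self.drop(hyp_emb))    # (B, Th, 2h)
        p, _ = self.enc1(self.drop(prem_emb))   # (B, Tp, 2h)
        # co-attention matrix between every premise/hypothesis token pair
        a = (self.w1(p) + self.w2(h).transpose(1, 2)
             + (p * self.w3) @ h.transpose(1, 2))            # (B, Tp, Th)
        p_att = self.attend(a, h)                            # premise attends hypothesis
        h_att = self.attend(a.transpose(1, 2), p)            # and vice versa
        p2, _ = self.enc2(self.drop(torch.cat([p, p_att], dim=-1)))
        h2, _ = self.enc2(self.drop(torch.cat([h, h_att], dim=-1)))
        v = torch.cat([self.attentive_pool(p2), self.attentive_pool(h2)], dim=-1)
        return self.out(self.drop(v))  # logits; softmax is applied in the loss
```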
] ], "section_name": [ "Introduction", "Related Work", "The BoolQ Dataset", "Data Collection", "Analysis", "Annotation Quality", "Question Types", "Types of Inference", "Discussion", "Training Yes/No QA Models", "Shallow Models", "Neural Models", "Question/Passage Only Results", "Transfer Learning Results", "Multi-Step Transfer Results", "Sample Efficiency", "Conclusion", "Randomly Selected Examples", "Recurrent Model" ] }
{ "answers": [ { "annotation_id": [ "5e90675cae548d25dc1b0665ced21ff19a8da6e4" ], "answer": [ { "evidence": [ "Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors.", "Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors.\n\nOur OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 ." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "8dc8cf90f7f5babb541790c8dfb02e8de03b0ace" ], "answer": [ { "evidence": [ "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.", "Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.", "Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes\" or “no\". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully." ], "extractive_spans": [], "free_form_answer": "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. 
Finally, annotators mark whether the question's answer is “yes\" or “no\"", "highlighted_evidence": [ "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.\n\nQuestions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.\n\nAnnotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes\" or “no\". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "44d233736d6e5a34276978c77b4721aa21521cd3" ], "answer": [ { "evidence": [ "We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. “Yes” answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens)." ], "extractive_spans": [ " 16k questions" ], "free_form_answer": "", "highlighted_evidence": [ "We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "did they use other pretrained language models besides bert?", "how was the dataset built?", "what is the size of BoolQ dataset?" ], "question_id": [ "5871d258f66b00fb716065086f757ef745645bfe", "c554a453b6b99d8b59e4ef1511b1b506ff6e5aa4", "10210d5c31dc937e765051ee066b971b6f04d3af" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: Example yes/no questions from the BoolQ dataset. Each example consists of a question (Q), an excerpt from a passage (P), and an answer (A) with an explanation added for clarity.", "Table 1: Question categorization of BoolQ. Question topics are shown in the top half and question types are shown in the bottom half.", "Table 2: Kinds of reasoning needed in the BoolQ dataset.", "Table 3: Transfer learning results on the BoolQ dev set after fine-tuning on the BoolQ training set. Results are averaged over five runs. In all cases directly using the pre-trained model without fine-tuning did not achieve results better than the majority baseline, so we do not include them here.", "Table 4: Test set results on BoolQ, “+MultiNLI” indicates models that were additionally pre-trained on MultiNLI before being fine-tuned on the train set.", "Figure 2: Accuracy for various models on the BoolQ dev set as the number of training examples varies." ], "file": [ "1-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "9-Figure2-1.png" ] }
[ "how was the dataset built?" ]
[ [ "1905.10044-Data Collection-2", "1905.10044-Data Collection-1", "1905.10044-Data Collection-3" ] ]
[ "Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable\" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes\" or “no\"" ]
519
1708.04557
Database of Parliamentary Speeches in Ireland, 1919-2013
We present a database of parliamentary debates that contains the complete record of parliamentary speeches from Dáil Éireann, the lower house and principal chamber of the Irish parliament, from 1919 to 2013. In addition, the database contains background information on all TDs (Teachta Dála, members of parliament), such as their party affiliations, constituencies and office positions. The current version of the database includes close to 4.5 million speeches from 1,178 TDs. The speeches were downloaded from the official parliament website and further processed and parsed with a Python script. Background information on TDs was collected from the member database of the parliament website. Data on cabinet positions (ministers and junior ministers) was collected from the official website of the government. A record linkage algorithm and human coders were used to match TDs and ministers.
{ "paragraphs": [ [ "Almost all political decisions and political opinions are, in one way or another, expressed in written or spoken texts. Great leaders in history become famous for their ability to motivate the masses with their speeches; parties publish policy programmes before elections in order to provide information about their policy objectives; parliamentary decisions are discussed and deliberated on the floor in order to exchange opinions; members of the executive in most political systems are legally obliged to provide written or verbal answers to questions from legislators; and citizens express their opinions about political events on internet blogs or in public online chats. Political texts and speeches are everywhere that people express their political opinions and preferences.", "It is not until recently that social scientists have discovered the potential of analyzing political texts to test theories of political behavior. One reason is that systematically processing large quantities of textual data to retrieve information is technically challenging. Computational advances in natural language processing have greatly facilitated this task. Adaptation of such techniques in social science – for example, Wordscore BIBREF0 , BIBREF1 or Wordfish BIBREF2 – now enable researchers to systematically compare documents with one another and extract relevant information from them. Applied to party manifestos, for which most of these techniques have been developed, these methods can be used to evaluate the similarity or dissimilarity between manifestos, which can then be used to derive estimates about parties' policy preferences and their ideological distance to each other.", "One area of research that increasingly makes use of quantitative text methods are studies of legislative behavior and parliaments BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . Only a few parliaments in the world use roll-call votes (the recording of each legislator's decision in a floor vote) that allow for the monitoring of individual members' behavior. In all other cases, contributions to debates are the only outcome that can be observed from individual members. Using such debates for social science research, however, is often limited by data availability. Although most parliaments keep written records of parliamentary debates and often make such records electronically available, they are never published in formats that facilitate social science research. A significant amount of labor is usually required to collect, clean and organize parliamentary records before they can be used for analytical purposes, often requiring technical skills that many social scientists lack.", "The purpose of this paper is to present a new database of parliamentary debates to overcome precisely this barrier. Our database contains all debates as well as questions and answers in Dáil Éireann, covering almost a century of political discourse from 1919 to 2013. These debates are organized in a way that allows users to search by date, topics or speaker. More importantly, and lacking in the official records of parliamentary debates, we have identified all speakers and linked their debate contributions to the information on party affiliation and constituencies from the official members database. This enables researchers to retrieve member-specific speeches on particular topics or within a particular timeframe. 
Furthermore, all data can be retrieved and stored in formats that can be accessed using commonly used statistical software packages.", "In addition to documenting this database, we also present three applications in which we make use of the new data (Section SECREF3 ). In the first study, we analyze budget speeches delivered by all finance ministers from 1922 to 2008 (Section SECREF11 ) and show how the policy agenda and ministers' policy preferences have changed over time (Section SECREF16 ). In the second application we compare contributions that were made on one particular topic: the 2008 budget debate (Section SECREF20 ). Here we demonstrate how text analytics can be used to estimate members' policy preferences on a dimension that represents pro- versus anti-government attitudes. Finally, we estimate all contributions from members of the 26th government that formed as a coalition between Fianna Fáil and the Progressive Democrats in 2002. Here we estimate the policy positions of all cabinet ministers on a pro- versus anti-spending dimension and show that positions on this dimension are highly correlated with the actual spending levels of each ministerial department (Section SECREF25 )." ], [ "Parliamentary debates in Dáil Éireann are collected by the Oireachtas' Debates Office and published as the Official Record. The Debates Office records and transcribes all debates and then publishes them both in printed as well as in digital form. All debates are then published on Oireachtas' website as single HTML files. At the time of writing, the official debates website contains 549,292 HTML files. The content of all these HTML files forms the data source for our database. It is obviously impossible to hand-code that much information. We therefore wrote a computer script that automated the processing of all files. This script is able to find all debate contributions and the names of all speakers in each file. In addition, it retrieves the date as well as the topic of each debate.", "As already explained above, the official online version of the Official Records does not provide information about speakers besides their name. Each speaker's name is “hard coded” into the HTML files and not linked to the information in the official members database. In addition, speaker names are not coded consistently, hence making it difficult to collect speeches from a particular deputy. Our goal was to identify every single speaker name that appears in the Official Record and integrate parliamentary speeches with information about deputies' party affiliation, constituency, age and profession from the official members database into a single database. We therefore used an automated record-linkage procedure to identify every single speaker.", "The final database contains all debates and written answers from the first meeting of the Dáil on 21 January 1919 through to 28 March 2013, covering every Dáil session that has met during this period. In total, the database contains 4,443,713 individual contributions by 1,178 TDs. The data is organized in a way that facilitates analysis for substantive questions of interest to social scientists. Every row in the data set is one contribution with columns containing information on the following variables:" ], [ "In the previous section, we have explained the structure of the database. In the following three sections we demonstrate how the data can be used for social science research. We do this by demonstrating three different applications. 
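To illustrate the kind of processing described above, here is a rough sketch of extracting contributions from a debate HTML file and linking speaker names to the members database; the HTML selectors are invented for illustration (the real Official Report markup differs), and difflib stands in for whatever record-linkage algorithm was actually used alongside the human coders.

```python
# Illustrative sketch only: pull speaker names and contributions out of a
# debate HTML file and link each speaker to the members database by
# approximate name matching. Tag/class names are assumptions.
from difflib import get_close_matches
from bs4 import BeautifulSoup

def parse_debate(html, members):
    """members: dict mapping canonical TD names to metadata (with an 'id')."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for block in soup.select("p.speech"):              # assumed selector
        name_tag = block.select_one("span.speaker")    # assumed selector
        if name_tag is None:
            continue
        raw_name = name_tag.get_text(strip=True)
        match = get_close_matches(raw_name, members.keys(), n=1, cutoff=0.8)
        rows.append({
            "speaker_raw": raw_name,
            "speaker_id": members[match[0]]["id"] if match else None,
            "text": block.get_text(" ", strip=True),
        })
    return rows
```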
In the first application, we analyze the budget speeches of all finance ministers from 1922 to 2008. Budget speeches are delivered by Finance Ministers once a year, with the exception of emergency budgets. Analyzing this data, we show how policy agendas and ministers' fiscal preferences have changed over time. In the second application, we construct a data set that resembles a cross-sectional analysis as we retrieve all speeches from one particular year and on one particular topic from our database: the 2008 budget debate. This data structure enables us to estimate the policy positions of all speakers who contributed to the budget debate and to compare how similar or dissimilar their preferences were. We find that policy positions are clustered into two groups: the government and the opposition; but we also find considerable variation within each group. Finally, we take all contributions made during the term of one government and use the data to estimate the policy positions of all cabinet members on a dimension representing pro- versus anti-spending. We demonstrate the validity of estimated policy positions by comparing them against actual spending levels of each cabinet ministers' department and show that the two measures are almost perfectly correlated with each other." ], [ "The quantitative analysis of text is primarily based on the proposition that preference profiles of speakers can be constructed from their word frequencies BIBREF15 , BIBREF16 . This makes word frequencies the most important data input to almost all existing methods of text analysis. Word frequencies can be easily visualized as word clouds. These word clouds show the most frequently used words in a text with font size being proportional to frequency of appearance. Despite their simplicity, word clouds can be used as a first descriptive view of the data. Here we look at word clouds for the speeches made by Irish Ministers for Finance. We have extracted the budget speeches of all finance ministers from our database, the first being Cosgrave's speech in April 1923, and the latest being Lenihan's speech in October 2008. In total, there are 90 speeches given by 23 different finance ministers for whom we have generated word clouds as shown in Figure FIGREF12 .", "One way to look at Figure FIGREF12 is to consider that each individual word cloud panel presents a snapshot into the preference profiles of individual ministers. With taxation being the key instrument of fiscal policy it is unsurprising that the word “tax” is on average the most frequently used word across all Ministers for Finance. We can also discern that frequency of references to “government” has been uneven over time with relatively high usage in the 1960s to 1980s and then subsequent decline (apart from Quinn's tenure) until the later speeches of Cowen and particularly Lenihan.", "What is more clearly evident is the change in the number of unique words used by different ministers. This reflects the fact that some budget speeches were very short, while others were long and covered many distinct topics. The easiest example is to compare speeches by two consecutive ministers: Cowen and Lenihan. Word clouds reflect the sheer multitude of problems facing the country that needed to be addressed by Lenihan compared to the relatively “quieter” (on average) three budgets delivered by Cowen.", "Overall, while catchy word clouds can only be used as easy first-cut visualizations of the data, rather than methods for any meaningful analysis. 
One thing that becomes readily apparent from Figure FIGREF12 is that word clouds do not facilitate systematic comparison of documents and their content with one another. Next, we show how our data facilitates the application of relatively simple text analysis techniques to answer more complex empirical questions without the ambiguity in interpretation that is inherent in word clouds." ], [ "Wordfish BIBREF2 is a method that combines Item Response Theory BIBREF17 with text classification. Wordfish assumes that there is a latent policy dimension and that each author has a position on this dimension. Words are assumed to be distributed over this dimension such that INLINEFORM0 , where INLINEFORM1 is the count of word INLINEFORM2 in document INLINEFORM3 at time INLINEFORM4 . The functional form of the model is assumed to be INLINEFORM5 ", "where INLINEFORM0 are fixed effects to control for differences in the length of speeches and INLINEFORM1 are fixed effects to control for the fact that some words are used more often than others in all documents. INLINEFORM2 are the estimates of authors' position on the latent dimension and INLINEFORM3 are estimates of word-weights that are determined by how important specific words are in discriminating documents from each other. In this model each document is treated as a separate actor's position and all positions are estimated simultaneously. If a minister maintains a similar position from one budget speech to the next, this means that words with similar frequencies were used over time. At the same time any movement detected by the model towards a position held by, for example, his predecessor, means that the minister's word choice is now much closer to his predecessor than to his own word usage in the previous budget speech. The identification strategy for the model also sets the mean of all positions to 0 and the standard deviation to 1, thus allowing over time a change in positions relative to the mean with the total variance of all positions over time fixed BIBREF2 . Effectively this standardizes the results and allows for the comparison of positions over time on a comparable scale.", "Before including documents in the analysis, we have removed all numbers, punctuation marks, and stop words. In addition, we follow the advice in BIBREF18 and delete words that appear in less than 20% of all speeches. We do this in order to prevent words that are specific to a small time period (and hence only appear in a few speeches) from having a large impact on discriminating speeches from each other. Figure FIGREF17 shows the results of estimation, with an overlaid regression line.", "The results in Figure FIGREF17 indicate a concept drift – the gradual change over time of the underlying concept behind the text categorization class BIBREF19 . In the political science text scaling literature, this issue is known as agenda shift BIBREF18 . In supervised learning models like Wordscore, this problem has typically been dealt with by estimating text models separately for each time period BIBREF20 , BIBREF21 , where the definition of the dimensions remains stable through the choice of training documents. However, this approach is not easily transferrable to inductive techniques like Wordfish, where there may be substantively different policy dimensions at different time periods, rendering comparison of positions over time challenging, if not impossible. 
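Because the inline formulas in the Wordfish description above were lost in extraction (the INLINEFORM placeholders), it may help to restate the model in the standard notation of the Wordfish literature, which the surrounding definitions appear to follow; this is a reconstruction, not the paper's own typesetting.

```latex
% Standard Wordfish specification (reconstruction, assumed to match the
% lost inline formulas): word j, document i, time t.
y_{ijt} \sim \mathrm{Poisson}(\lambda_{ijt}), \qquad
\lambda_{ijt} = \exp\!\left(\alpha_{it} + \psi_{j} + \beta_{j}\,\omega_{it}\right)
```

Here the alpha terms are the document fixed effects controlling for speech length, the psi terms the word fixed effects, omega the latent position, and beta the word weights, matching the verbal description above.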
A clear presence of the concept drift issue in Wordfish estimation should be a cautionary note for using the approach with time series data, even though the original method was specifically designed to deal with time-series data as indicated in the title of the paper BIBREF2 .", "Looking at Figure FIGREF17 we can also observe that some ministers have similar preference profiles while others differ significantly. For example, Ahern and Reynolds are very similar in their profile but differ from a group consisting of Quinn, McCreevy, Cowen, and Lenihan who are very close to each other. There also appears to be a dramatic shift in agenda between the tenures of Lynch and Haughey (and also during Taoiseach Lynch's delivery of the budget speech for the Minister for Finance Charles Haughey in 1970). Overall, it appears that topics covered in budget speeches develop in waves, with clear bands formed by, for example, Lenihan, Cowen, McCreevy and Quin; Ahern and Reynolds; MacSharry, Dukes, Bruton, Fitzgerald, O'Kennedy and Colley; R. Ryan, Colley, Lynch (for Haughey); MacEntee, McGilligan and Aiken; Blythe and MacEntee.", "One intuitive interpretation of our Wordfish results is that budget speeches by finance ministers are related to underlying macroeconomic dynamics in the country. We consider the relationship between estimated policy positions of Minsters and three core economic indicators: unemployment, inflation, and per capita GDP growth rates. Figure FIGREF18 shows the three economic indicators, inflation (1923–2008), GDP growth (annual %; 1961–2008) and unemployment rate (1956–2008), over time.", "Figure FIGREF19 show Ministers' estimated positions plotted against the three indicators.", "As expected, the results presented here show that the policy positions of some Ministers can be partly explained by the contemporaneous economic situation in the country. However, the fact that some of the Ministers are clear outliers highlights the effect of individual characteristics on policy-making. One of the avenues for research that arises from this exercise is to analyze the determinants of these individual idiosyncrasies, possibly looking at education, class, and previous ministerial career. Such questions can now be easily investigated by researchers using our database." ], [ "In the previous section, we used budget speeches from each year and compared them over time. In this section, we restrict the analysis to a single year but take multiple speeches made on the same topic. More specifically, we estimate the preferences of all speakers who participated in the debate over the 2008 budget. We extract these speeches from the database by selecting all contributions to the topic “Financial Resolution” in year 2007. This leaves us with a total of 22 speakers from all five parties. Table TABREF22 shows the speeches included in the analysis.", "To estimate speakers' position we use Wordscore BIBREF1 – a version of the Naive Bayes classifier that is deployed for text categorization problems BIBREF22 . In a similar application, BIBREF1 have already demonstrated that Wordscore can be effectively used to derive estimates of TDs policy positions. As in the example above, we pre-process documents by removing all numbers and interjections.", "Wordscore uses two documents with well-known positions as reference texts (training set). The positions of all other documents are then estimated by comparing them to these reference documents. 
The underlying idea is that a document that, in terms of word frequencies, is similar to a reference document was produced by an author with similar preferences. The selection of reference documents furthermore determines the (assumed) underlying dimension for which documents' positions are estimated. For example, using two opposing documents on climate change would scale documents on the underlying dimension “climate politics”. It has also been shown that under certain assumptions the Wordscore algorithm is related to the Wordfish algorithm used in the previous section BIBREF23 .", "We assume that contributions in budget debates have the underlying dimension of being either pro or contra the current government. Our interpretation from reading the speeches is that, apart from the budget speech itself, all other speeches largely either attack or defend the incumbent government and to a lesser extent debate the issues of the next budget. We can therefore use contributions during the budget debate as an indicator for how much a speaker is supporting or opposing the current government, here consisting of Fianna Fáil and the Green Party. As our reference texts we therefore chose the speeches of Bertie Ahern (Taoiseach) and Enda Kenny (FG party leader). The former should obviously be strongly supportive of the government while the latter, as party leader of the largest opposition party, should strongly oppose it. Figure FIGREF24 shows estimated positions for all speakers grouped by party affiliation.", "The estimated positions are clustered into two groups, one representing the government and one the opposition. Within the government cluster, Deputy Batt O'Keeffe (Minister of State at the Department of Environment, Heritage and Local Government) is estimated to be the most supportive speaker for the government, while Deputy Pat Carey (Minister of State at the Department of Community, Rural and Gaeltacht Affairs) and Deputy Sean Ardagh are estimated to be relatively closer to the opposition. Deputy John Gormley, leader of the Green party and Minister for the Environment, Heritage and Local Government in the FF-Green coalition, is estimated to be in the centre of the government cluster. Among all positions in the opposition cluster, the speech of Róisín Shortall is the closest to the government side, with Neville being the farthest out." ], [ "The government cabinet in parliamentary democracies is at the core of political decision making, yet it is difficult to model intra-cabinet bargaining as the preferences of most cabinet members are unknown. Cabinet decisions are usually made behind closed doors and the doctrine of joint cabinet responsibility prevents ministers from publicly opposing decisions, even if they disagree with them. Using ministers' speeches and their responses during question times offer a unique opportunity to infer their preferences on policy dimensions of interest. In our final application we estimate policy positions for all cabinet members in the 26th government. The dimension on which positions are estimated represents pro- versus contra-government spending (or spending left-right). We show that estimated positions are highly correlated with departments' actual spending, which means that estimated positions are not only meaningful but can also be used to predict actual policy-making.", "The 26th government was formed as a coalition between Fianna Fáil and the Progressive Democrats after the election for the 29th Dáil in 2002. 
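A minimal sketch of the two-reference Wordscore computation used in this and the following application is given below; it uses arbitrary reference positions of -1 and +1 and omits the rescaling of virgin-text scores that the full method applies, so it illustrates the idea rather than reproducing the exact procedure.

```python
# Minimal two-reference Wordscore sketch: the reference speeches receive
# assumed positions -1 and +1, each word gets a score from its relative
# frequency in the references, and every other speech is scored by the
# frequency-weighted average of its word scores (no rescaling step).
from collections import Counter

def relative_freqs(text):
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def word_scores(ref_a, ref_b, pos_a=-1.0, pos_b=1.0):
    fa, fb = relative_freqs(ref_a), relative_freqs(ref_b)
    scores = {}
    for w in set(fa) | set(fb):
        pa, pb = fa.get(w, 0.0), fb.get(w, 0.0)
        scores[w] = (pos_a * pa + pos_b * pb) / (pa + pb)
    return scores

def score_text(text, scores):
    fv = relative_freqs(text)
    shared = {w: f for w, f in fv.items() if w in scores}
    total = sum(shared.values())
    return sum(f * scores[w] for w, f in shared.items()) / total if total else 0.0
```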
The cabinet was reshuffled on 29 September 2004 and we only include ministers' speeches until that date. Table TABREF26 lists all cabinet members (and their portfolios) included in our analysis.", "To estimate ministers' policy positions, we retrieve the complete record of each minister's contribution in parliament from the first meeting on 6 June 2002 until the date of the reshuffle. On average, each minister made 3,643 contributions with an average number of 587,077 words. Table TABREF27 provides summary statistics for all ministers, sorted by total word count.", "We again use Wordscore BIBREF0 , BIBREF1 to estimate positions as it allows us to define the underlying policy dimension by choosing appropriate reference texts. We estimate positions on a social-economic left-right dimension that reflects pro- versus contra-government spending. We therefore use contributions by Mary Coughlan (Minister for Social and Family Affairs) and Charlie McCreevy (Minister for Finance) as reference texts, assuming that the former is more in favor of spending than the latter. Figure FIGREF28 shows the results of estimation grouped by the two parties.", "As expected, we find that the two PD members, Mary Harney and Michael McDowell, are at the right side of the dimension. We estimate the most left-wing members to be Éamon Ó Cuív (Minister for Community, Rural and Gaeltacht Affairs), Noel Dempsey (Minister for Education and Science), and Micheál Martin (Minister for Health and Children). The most right-wing members are John O'Donoghue (Minister for Arts, Sport and Tourism), Charlie McCreevy (whose contributions we used as right-wing reference text), and Michael Smith (Minister for Defense).", "How valid are these estimated positions? In order to have substantive meaning, our estimates should be able to predict political decisions on the same policy dimension. We therefore use ministers' estimated positions to predict their departmental spending level BIBREF3 . Our outcome variable is each department's spending as share of the total budget in 2004 modeled as a function of estimated policy positions. We conjecture that more left-wing ministers should have higher spending levels than right-wing ministers, which we test by estimating DISPLAYFORM0 ", "via ordinary least-square regression. Figure FIGREF31 shows the two variables plotted against each other together with the estimated regression line from equation EQREF29 . In one analysis shown we include all cabinet members. In the other, we exclude non-spending departments with small budgets, such as the office of the Taoiseach or the Department of Foreign Affairs.", "Figure FIGREF31 reveals that there is a negative, albeit weak, relationship between estimated positions and spending, with more left-wing cabinet members having higher spending levels than right-wing members. The correlation between the two variables is -0.53 ( INLINEFORM0 ) which is not significant at the 0.05 level. However, if we only take members from high-spending departments into account (second pane in Figure FIGREF31 ) we find a significant linear relationship between the two variables with a correlation coefficient of -0.95 ( INLINEFORM1 ). This result provides some level of validation for our data and analysis.", "These results also open up an intriguing question about the endogeneity of observable policy preferences of ministers. 
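The display equation referenced above (the DISPLAYFORM0 placeholder) did not survive extraction; from the surrounding description it is presumably the bivariate regression

```latex
% Presumed form of the lost display equation (a reconstruction):
\text{spending share}_{i} = \beta_{0} + \beta_{1}\,\hat{\omega}_{i} + \varepsilon_{i}
```

where the omega-hat term is minister i's estimated position, and a negative slope matches the reported negative correlation between estimated positions and departmental spending shares.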
Do higher spending portfolios receive more pro-spending ministers or do ministers adapt their policy preferences after appointment and literally grow into the job? This and related questions are outside the scope of this paper and can be pursued by researchers with the help of our database of parliamentary speeches." ], [ "Policy preferences of individual politicians (ministers or TDs in general), are inherently unobservable. However, we have abundant data on speeches made by political actors. The latest developments in automated text analysis techniques allow us to estimate the policy positions of individual actors from these speeches.", "In relation to Irish political actors such estimation has been hindered by the structure of the available data. While all speeches made in Dáil Éireann are dutifully recorded, the architecture of the data set, where digitized versions of speeches are stored, makes it impossible to apply any of the existing text analysis software. Speeches are currently stored by Dáil Éireann in more than half a million separate HTML files with entries that are not related to one another.", "In this paper we present a new database of speeches that was created with the purpose of allowing the estimation of policy preferences of individual politicians. For that reason we created a relational database where speeches are related to the members database and structured in terms of dates, topics of debates, and names of speakers, their constituency and party affiliation. This gives the necessary flexibility to use available text scaling methods in order to estimate the policy positions of actors.", "We also present several examples for which this data can be used. We show how to estimate the policy positions of all Irish Ministers for Finance, and highlight how this can lead to interesting research questions in estimating the determinants of their positions. We show that for some ministers the position can be explained by the country's economic performance, while the preferences of other ministers seem to be idiosyncratic. In another example we estimate positions of individual TDs in a budget debate, followed by the estimation of policy positions of cabinet members of the 26th Government.", "With the introduction of our database, we aim to make text analysis an easy and accessible tool for social scientists engaged in empirical research on policy-making that requires estimation of policy preferences of political actors." ] ], "section_name": [ "Introduction", "Overview of Database Content", "Analyzing the Content of Parliamentary Debates", "The Content of Budget Speeches in Historical Perspective", "Estimation of Finance Ministers' Policy Positions", "Speakers' Policy Position in the 2008 Budget Debate", "Ministers' policy position in the 26th government", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "45153f9f1b03de7a91909986aba397780858f75c" ], "answer": [ { "evidence": [ "To estimate speakers' position we use Wordscore BIBREF1 – a version of the Naive Bayes classifier that is deployed for text categorization problems BIBREF22 . In a similar application, BIBREF1 have already demonstrated that Wordscore can be effectively used to derive estimates of TDs policy positions. As in the example above, we pre-process documents by removing all numbers and interjections." ], "extractive_spans": [], "free_form_answer": "Remove numbers and interjections", "highlighted_evidence": [ "As in the example above, we pre-process documents by removing all numbers and interjections." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "" ], "paper_read": [ "" ], "question": [ "what processing was done on the speeches before being parsed?" ], "question_id": [ "5d9b088bb066750b60debfb0b9439049b5a5c0ce" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "" ], "topic_background": [ "" ] }
{ "caption": [ "Fig. 1. Finance ministers’ policy positions as estimated from all budget speeches (1922–2009) with an overlaid linear regression line.", "Fig. 2. The Irish economy over time: Inflation (1923–2008), Per Capita GDP growth (annual %; 1961–2008) and unemployment rate (1956–2008).", "Fig. 4. Estimated positions of all speakers in the 2008 budget debate. Estimated dimension represents pro- versus anti-government positions. Scaling of x-axis is arbitrary. Speeches of Bertie Ahern (FF, Taoiseach) and Enda Kenny (FG party leader) were used as reference texts for being respectively pro- or anti-government.", "Fig. 3. Estimated finance ministers’ positions against inflation (1923–2008), GDP growth (annual %; 1961–2008), and unemployment rate (1956–2008), with an overlaid linear regression line.", "Fig. 5. Estimated positions for all cabinet members in the 26th government (29th Dáil) using Wordscore. Positions are jittered along the y-axis. Estimation is based on each minister’s contribution in Dáil Éireann before the cabinet reshuffle on 29 September 2004. Speeches by Mary Coughlan (Minister for Social and Family Affairs) and Charlie McCreevy (Minister for Finance) are used as left and right reference texts, respectively", "Fig. 6. Cabinet ministers’ policy position plotted departmental spending as share of total government budget in 2004. For the analysis of high-spending departments we remove the Office of the Taoiseach or the Department of Foreign Affairs, with the remaining eight departments accounting for more than 95 per cent of the total budget in 2004" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure4-1.png", "4-Figure3-1.png", "5-Figure5-1.png", "6-Figure6-1.png" ] }
[ "what processing was done on the speeches before being parsed?" ]
[ [ "1708.04557-Speakers' Policy Position in the 2008 Budget Debate-1" ] ]
[ "Remove numbers and interjections" ]
520
2003.12932
User Generated Data: Achilles' heel of BERT
Pre-trained language models such as BERT are known to perform exceedingly well on various NLP tasks and have even established new State-Of-The-Art (SOTA) benchmarks for many of these tasks. Owing to its success on various tasks and benchmark datasets, industry practitioners have started to explore BERT to build applications solving industry use cases. These use cases are known to have much more noise in the data as compared to benchmark datasets. In this work we systematically show that when the data is noisy, there is a significant degradation in the performance of BERT. Specifically, we performed experiments using BERT on popular tasks such as sentiment analysis and textual similarity. For this we work with three well known datasets - IMDB movie reviews, SST-2 and STS-B - to measure the performance. Further, we examine the reason behind this performance drop and identify the shortcomings in the BERT pipeline.
{ "paragraphs": [ [ "In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformerBIBREF1 based encoder architectureFIGREF1. While it is still not very clear as to why BERT along with its embedding works so well for downstream tasks when it is fine tuned, there has been some work in this direction that that gives some important cluesBIBREF2, BIBREF3.", "At a high level, BERT’s pipelines looks as follows: given a input sentence, BERT tokenizes it using wordPiece tokenizerBIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM)BIBREF0 and Next Sentence Prediction (NSP)BIBREF0.", "The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same.", "This evaluation is motivated from the business use case we are solving where we are building a dialogue system to screen candidates for blue collar jobs. Our candidate user base, coming from underprivileged backgrounds, are often high school graduates. This coupled with ‘fat finger’ problem over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable across various use cases in industry - be it be sentiment classification on twitter data or topic detection of a web forum.", "To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of words present in it. These words are chosen randomly. We will explain this process in detail later. Spelling mistakes introduced mimic the typographical errors in the text introduced by our users. We then use the BERT model for tasks using both clean and noisy datasets and compare the results. We show that the introduction of noise leads to a significant drop in performance of the BERT model for the task at hand as compared to clean dataset. We further show that as we increase the amount of noise in the data, the performance degrades sharply." 
], [ "In recent years pre-trained language models ((e.g. ELMoBIBREF5, BERTBIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERTBIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answeringBIBREF0, machine translationBIBREF6 etc. BERT has been able to establish State-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve searchBIBREF7.", "Owing to its success, researchers have started to focus on uncovering drawbacks in BERT, if any. BIBREF8 introduce TEXTFOOLER, a system to generate adversarial text. They apply it to NLP tasks of text classification and textual entailment to attack the BERT model. BIBREF9 evaluate three models - RoBERTa, XLNet, and BERT in Natural Language Inference (NLI) and Question Answering (QA) tasks for robustness. They show that while RoBERTa, XLNet and BERT are more robust than recurrent neural network models to stress tests for both NLI and QA tasks; these models are still very fragile and show many unexpected behaviors. BIBREF10 discuss length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model and they show 78% and 39% attack accuracy respectively.", "Our contribution in this paper is to answer that can we use large language models like BERT directly over user generated data." ], [ "For our experiments, we use pre-trained BERT implementation as given by huggingface transformer library. We use the BERTBase uncased model. We work with three datasets namely - IMDB movie reviewsBIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13.", "IMDB dataset is a popular dataset for sentiment analysis tasks, which is a binary classification problem with equal number of positive and negative examples. Both STS-B and SST-2 datasets are a part of GLUE benchmark[2] tasks . In STS-B too, we predict positive and negative sentiments. In SST-2 we predict textual semantic similarity between two sentences. It is a regression problem where the similarity score varies between 0 to 5. To evaluate the performance of BERT we use standard metrics of F1-score for imdb and STS-B, and Pearson-Spearman correlation for SST-2.", "In Table TABREF5, we give the statistics for each of the datasets.", "We take the original datasets and add varying degrees of noise (i.e. spelling errors to word utterances) to create datasets for our experiments. From each dataset, we create 4 additional datasets each with varying percentage levels of noise in them. For example from IMDB, we create 4 variants, each having 5%, 10%, 15% and 20% noise in them. Here, the number denotes the percentage of words in the original dataset that have spelling mistakes. Thus, we have one dataset with no noise and 4 variants datasets with increasing levels of noise. Likewise, we do the same for SST-2 and STS-B.", "All the parameters of the BERTBase model remain the same for all 5 experiments on the IMDB dataset and its 4 variants. This also remains the same across other 2 datasets and their variants. For all the experiments, the learning rate is set to 4e-5, for optimization we use Adam optimizer with epsilon value 1e-8. We ran each of the experiments for 10 and 50 epochs." 
], [ "Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs.", "Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively.", "Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively." ], [ "It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model.", "When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance.", "To understand this better, let us look into two examples, one each from the IMDB and STS-B datasets respectively, as shown below. Here, (a) is the sentence as it appears in the dataset ( before adding noise) while (b) is the corresponding sentence after adding noise. The mistakes are highlighted with italics. The sentences are followed by the corresponding output of the WordPiece tokenizer on these sentences: In the output ‘##’ is WordPiece tokenizer’s way of distinguishing subwords from words. 
", "Example 1 (IMDB example):", "“that loves its characters and communicates something rather beautiful about human nature” (0% error)", "“that loves 8ts characters abd communicates something rathee beautiful about human natuee” (5% error)", "Output of the WordPiece tokenizer:", "['that', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human','nature'] (0% error IMDB example)", "['that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate','##s', 'something','rat', '##hee', 'beautiful', 'about', 'human','nat', '##ue', '##e'] (5% error IMDB example)", "Example 2 (STS example):", "“poor ben bratt could n't find stardom if mapquest emailed him point-to-point driving directions.” (0% error)", "“poor ben bratt could n't find stardom if mapquest emailed him point-to-point drivibg dirsctioge.” (5% error)", "Output of the WordPiece tokenizer:", "['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him','point', '-', 'to', '-', 'point', 'driving', 'directions', '.'] (0% error STS example)", "['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib','##g','dir','##sc', '##ti', '##oge', '.'] (5% error STS example)", "In example 1, the tokenizer splits communicates into [‘communicate’, ‘##s’] based on longest prefix matching because there is no exact match for “communicates” in the BERT vocabulary. The longest prefix in this case is “communicate” and the left over is “s”, both of which are present in the vocabulary of BERT. We have contextual embeddings for both “communicate” and “##s”. By using these two embeddings, one can get an approximate embedding for “communicates”. However, this approach goes for a complete toss when the word is misspelled. In example 1(b) the word natuee (‘nature’ is misspelled) is split into ['nat', '##ue', '##e'] based on the longest prefix match. Combining the three embeddings, one cannot approximate the embedding of nature. This is because the word nat has a very different meaning (it means ‘a person who advocates political independence for a particular country’). This misrepresentation in turn impacts the performance of downstream subcomponents of BERT, bringing down the overall performance of the BERT model. Hence, as we systematically introduce more errors, the quality of the output of the tokenizer degrades further, resulting in the overall performance drop.", "Our results and analysis show that one cannot apply BERT blindly to solve NLP problems, especially in industrial settings. If the application you are developing gets data from channels that are known to introduce noise in the text, then BERT will perform badly. Examples of such scenarios are applications working with Twitter data, mobile based chat systems, and user comments on platforms like YouTube and Reddit, to name a few. The reason for the introduction of noise can vary - for Twitter and Reddit it is often deliberate, because that is how users prefer to write, while mobile based chat often suffers from the ‘fat finger’ typing error problem. Depending on the amount of noise in the data, BERT can perform well below expectations.", "We further conducted experiments with tokenizers other than the WordPiece tokenizer. For this we used the stanfordNLP WhiteSpace BIBREF14 and Character N-gram BIBREF15 tokenizers.
The WhiteSpace tokenizer splits text into tokens based on white space. The Character N-gram tokenizer splits words that have more than n characters in them. Thus, each token has at most n characters in it. The resultant tokens from the respective tokenizer are fed to BERT as inputs. For our case, we work with n = 6.", "Results of these experiments are presented in Table TABREF25. Even though the WordPiece tokenizer has the issues stated earlier, it still performs better than the whitespace and character n-gram tokenizers. This is primarily because of the vocabulary overlap between the STS-B dataset and the BERT vocabulary." ], [ "In this work we systematically studied the effect of noise (spelling mistakes) in user generated text data on the performance of BERT. We demonstrated that as the noise increases, BERT’s performance drops drastically. We further investigated the BERT system to understand the reason for this drop in performance. We show that the problem lies with how misspelt words are tokenized to create a representation of the original word.", "There are 2 ways to address the problem: either (i) preprocess the data to correct spelling mistakes, or (ii) incorporate ways in the BERT architecture to make it robust to noise. The problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as future work." ] ], "section_name": [ "Introduction", "Related Work", "Experiment", "Results", "Results ::: Key Findings", "Conclusion and Future Work" ] }
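The subword splits listed in the examples above are easy to check against the BERT-Base uncased vocabulary. The short sketch below does this with the Hugging Face transformers BertTokenizer (the paper says it uses the huggingface implementation, but this exact snippet is mine); the expected outputs in the comments are copied from the examples in the text rather than re-verified here.

```python
# Quick check of the WordPiece behaviour discussed above.
# Assumes the `transformers` package is installed.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

for word in ["communicates", "nature", "natuee"]:
    print(word, "->", tokenizer.tokenize(word))

# Expected, per the examples in the paper:
#   communicates -> ['communicate', '##s']    meaning-preserving split
#   nature       -> ['nature']                in-vocabulary word
#   natuee       -> ['nat', '##ue', '##e']    misleading subwords for the misspelling
```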
{ "answers": [ { "annotation_id": [ "4586a85a0c6b7494ed0d33cf42a776425ebf3db4" ], "answer": [ { "evidence": [ "Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively.", "FLOAT SELECTED: Figure 5: Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset" ], "extractive_spans": [], "free_form_answer": "10 Epochs: pearson-Spearman correlation drops 60 points when error increase by 20%\n50 Epochs: pearson-Spearman correlation drops 55 points when error increase by 20%", "highlighted_evidence": [ "Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively.", "FLOAT SELECTED: Figure 5: Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "a9d965a94a17909ee4147998cf85a6d5c9d09f43" ], "answer": [ { "evidence": [ "Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs.", "Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively.", "FLOAT SELECTED: Figure 3: F1 score vs % of error for Sentiment analysis on IMDB dataset" ], "extractive_spans": [], "free_form_answer": "SST-2 dataset", "highlighted_evidence": [ "We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs.", "Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively.", "FLOAT SELECTED: Figure 3: F1 score vs % of error for Sentiment analysis on IMDB dataset" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "6e57952c2e8ac59bc53d4ec46781d590cc27444c" ], "answer": [ { "evidence": [ "The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. 
Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same." ], "extractive_spans": [ " non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages" ], "free_form_answer": "", "highlighted_evidence": [ "It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "738deeb7036bbc3cc85bee72b7fc2d39aa62c29b" ], "answer": [ { "evidence": [ "It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model.", "When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance." ], "extractive_spans": [ "Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance." ], "free_form_answer": "", "highlighted_evidence": [ "To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. 
BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model.", "When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "What is the performance change of the textual semantic similarity task when no error and maximum errors (noise) are present?", "Which sentiment analysis data set has a larger performance drop when a 10% error is introduced?", "What kind is noise is present in typical industrial data?", "What is the reason behind the drop in performance using BERT for some popular task?" ], "question_id": [ "7f9bc06cfa81a4e3f7df4c69a1afef146ed5a1cf", "58a340c338e41002c8555202ef9adbf51ddbb7a1", "0ca02893bda50007f7a76e7c8804101718fbb01c", "751aa2b1531a17496536887288699cc8d5c3cec9" ], "question_writer": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: BERT architecture [1]", "Figure 2: The Transformer model architecture [2]", "Table 1: Number of utterances in each datasets", "Figure 3: F1 score vs % of error for Sentiment analysis on IMDB dataset", "Figure 4: F1 score vs % of error for Sentiment analysis on SST-2 data", "Figure 5: Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset", "Table 2: Comparative results on STS-B dataset with different tokenizers" ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "5-Figure5-1.png", "6-Table2-1.png" ] }
[ "What is the performance change of the textual semantic similarity task when no error and maximum errors (noise) are present?", "Which sentiment analysis data set has a larger performance drop when a 10% error is introduced?" ]
[ [ "2003.12932-Results-2", "2003.12932-5-Figure5-1.png" ], [ "2003.12932-Results-1", "2003.12932-4-Figure3-1.png", "2003.12932-Results-0" ] ]
[ "10 Epochs: pearson-Spearman correlation drops 60 points when error increase by 20%\n50 Epochs: pearson-Spearman correlation drops 55 points when error increase by 20%", "SST-2 dataset" ]
522
2002.08307
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. A common paradigm is to pre-train a feature extractor on large amounts of data then fine-tune it as part of a deep learning model on some downstream task (i.e. transfer learning). While effective, feature extractors like BERT may be prohibitively large for some deployment scenarios. We explore weight pruning for BERT and ask: how does compression during pre-training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40\%) do not affect pre-training loss or transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. High levels of pruning additionally prevent models from fitting downstream datasets, leading to further degradation. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance.
{ "paragraphs": [ [ "Pre-trained feature extractors, such as BERT BIBREF0 for natural language processing and VGG BIBREF1 for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NLP tasks, including natural language inference (NLI), named entity recognition (NER), sentiment analysis, etc. These models follow a pre-training paradigm: they are trained on a large amount of unlabeled text via a task that resembles language modeling BIBREF2, BIBREF3 and are then fine-tuned on a smaller amount of “downstream” data, which is labeled for a specific task. Pre-trained models usually achieve higher accuracy than any model trained on downstream data alone.", "The pre-training paradigm, while effective, still has some problems. While some claim that language model pre-training is a “universal language learning task\" BIBREF4, there is no theoretical justification for this, only empirical evidence. Second, due to the size of the pre-training dataset, BERT models tend to be slow and require impractically large amounts of GPU memory. BERT-Large can only be used with access to a Google TPU, and BERT-Base requires some optimization tricks such as gradient checkpointing or gradient accumulation to be trained effectively on consumer hardware BIBREF5. Training BERT-Base from scratch costs $\\sim $$7k and emits $\\sim $1438 pounds of CO$_2$ BIBREF6.", "Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede it's ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible?", "To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section.", "Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. This information is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of pruning by a meaningful amount.", "To our knowledge, prior work had not shown whether BERT could be compressed in a task-generic way, keeping the benefits of pre-training while avoiding costly experimentation associated with compressing and re-training BERT multiple times. 
Nor had it shown whether BERT could be over-pruned for a memory / accuracy trade-off for deployment to low-resource devices. In this work, we conclude that BERT can be pruned prior to distribution without affecting it's universality, and that BERT may be over-pruned during pre-training for a reasonable accuracy trade-off for certain tasks." ], [ "Neural network pruning involves examining a trained network and removing parts deemed to be unnecessary by some heuristic saliency criterion. One might remove weights, neurons, layers, channels, attention heads, etc. depending on which heuristic is used. Below, we describe three different lenses through which we might interpret pruning.", "Compression Pruning a neural network decreases the number of parameters required to specify the model, which decreases the disk space required to store it. This allows large models to be deployed on edge computing devices like smartphones. Pruning can also increase inference speed if whole neurons or convolutional channels are pruned, which reduces GPU usage.", "Regularization Pruning a neural network also regularizes it. We might consider pruning to be a form of permanent dropout BIBREF11 or a heuristic-based L0 regularizer BIBREF12. Through this lens, pruning decreases the complexity of the network and therefore narrows the range of possible functions it can express. The main difference between L0 or L1 regularization and weight pruning is that the former induce sparsity via a penalty on the loss function, which is learned during gradient descent via stochastic relaxation. It's not clear which approach is more principled or preferred. BIBREF13", "Interestingly, recent work used compression not to induce simplicity but to measure it BIBREF14.", "Sparse Architecture Search Finally, we can view neural network pruning as a type of sparse architecture search. BIBREF15 and BIBREF16 show that they can train carefully re-initialized pruned architectures to similar performance levels as dense networks. Under this lens, stochastic gradient descent (SGD) induces network sparsity, and pruning simply makes that sparsity explicit. These sparse architectures, along with the appropriate initializations, are sometimes referred to as “lottery tickets.”", "Sparse networks are difficult to train from scratch BIBREF17. However, BIBREF18 and BIBREF19 present methods to do this by allowing SGD to search over the space of possible subnetworks. Our findings suggest that these methods might be used to train sparse BERT from scratch." ], [ "In this work, we focus on weight magnitude pruning because it is one of the most fine-grained and effective pruning methods. It also has a compelling saliency criterion BIBREF8: if a weight is close to zero, then its input is effectively ignored, which means the weight can be pruned.", "Magnitude weight pruning itself is a simple procedure: 1. Pick a target percentage of weights to be pruned, say 50%. 2. Calculate a threshold such that 50% of weight magnitudes are under that threshold. 3. Remove those weights. 4. Continue training the network to recover any lost accuracy. 5. Optionally, return to step 1 and increase the percentage of weights pruned. This procedure is conveniently implemented in a Tensorflow BIBREF20 package, which we use BIBREF21.", "Calculating a threshold and pruning can be done for all network parameters holistically (global pruning) or for each weight matrix individually (matrix-local pruning). 
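To make the two scopes concrete, here is a minimal NumPy sketch that computes a magnitude threshold either once over all parameters (global) or once per matrix (matrix-local) and zeroes everything below it. It is only a toy restatement of steps 1-3 of the procedure above, not the TensorFlow model-pruning package the paper actually uses, and the parameter names and shapes in the usage lines are made up.

```python
import numpy as np

def magnitude_prune(params, sparsity, scope="matrix"):
    """Zero out the `sparsity` fraction of lowest-magnitude weights (steps 1-3 above).
    `params` maps parameter names to NumPy arrays; returns the binary masks so that
    pruned weights can be held at zero during further training (step 4)."""
    if scope == "global":
        all_mags = np.concatenate([np.abs(w).ravel() for w in params.values()])
        threshold = np.quantile(all_mags, sparsity)      # one threshold for everything
    masks = {}
    for name, w in params.items():
        if scope == "matrix":                            # matrix-local: per-matrix threshold
            threshold = np.quantile(np.abs(w), sparsity)
        masks[name] = (np.abs(w) >= threshold).astype(w.dtype)
        w *= masks[name]                                 # remove (zero) the pruned weights
    return masks

# toy usage with made-up BERT-like shapes
params = {"attention_query": np.random.randn(768, 768),
          "ffn_output": np.random.randn(3072, 768)}
masks = magnitude_prune(params, sparsity=0.5, scope="matrix")
```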
Both methods will prune to the same sparsity, but in global pruning the sparsity might be unevenly distributed across weight matrices. We use matrix-local pruning because it is more popular in the community. For information on other pruning techniques, we recommend BIBREF13 and BIBREF15." ], [ "BERT is a large Transformer encoder; for background, we refer readers to BIBREF22 or one of these excellent tutorials BIBREF23, BIBREF24." ], [ "BERT-Base consists of 12 encoder layers, each of which contains 6 prunable matrices: 4 for the multi-headed self-attention and 2 for the layer's output feed-forward network.", "Recall that self-attention first projects layer inputs into key, query, and value embeddings via linear projections. While there is a separate key, query, and value projection matrix for each attention head, implementations typically “stack” matrices from each attention head, resulting in only 3 parameter matrices: one for key projections, one for value projections, and one for query projections. We prune each of these matrices separately, calculating a threshold for each. We also prune the linear output projection, which combines outputs from each attention head into a single embedding.", "We prune word embeddings in the same way we prune feed-foward networks and self-attention parameters. The justification is similar: if a word embedding value is close to zero, we can assume it's zero and store the rest in a sparse matrix. This is useful because token / subword embeddings tend to account for a large portion of a natural language model's memory. In BERT-Base specifically, the embeddings account for $\\sim $21% of the model's memory.", "Our experimental code for pruning BERT, based on the public BERT repository, is available here." ], [ "We perform weight magnitude pruning on a pre-trained BERT-Base model. We select sparsities from 0% to 90% in increments of 10% and gradually prune BERT to this sparsity over the first 10k steps of training. We continue pre-training on English Wikipedia and BookCorpus for another 90k steps to regain any lost accuracy. The resulting pre-training losses are shown in Table TABREF27.", "We then fine-tune these pruned models on tasks from the General Language Understanding Evaluation (GLUE) benchmark, which is a standard set of 9 tasks that include sentiment analysis, natural language inference, etc. We avoid WNLI, which is known to be problematic. We also avoid tasks with less than 5k training examples because the results tend to be noisy (RTE, MRPC, STS-B). We fine-tune a separate model on each of the remaining 5 GLUE tasks for 3 epochs and try 4 learning rates: $[2, 3, 4, 5] \\times 10^{-5}$. The best evaluation accuracies are averaged and plotted in Figure FIGREF15. Individual task results are in Table TABREF27.", "BERT can be used as a static feature-extractor or as a pre-trained model which is fine-tuned end-to-end. In all experiments, we fine-tune weights in all layers of BERT on downstream tasks." ], [ "Pruning involves two steps: it deletes the information stored in a weight by setting it to 0 and then regularizes the model by preventing that weight from changing during further training.", "To disentangle these two effects (model complexity restriction and information deletion), we repeat the experiments from Section SECREF9 with an identical pre-training setup, but instead of pruning we simply set the weights to 0 and allow them to vary during downstream training. 
This deletes the pre-training information associated with the weight but does not prevent the model from fitting downstream datasets by keeping the weight at zero during downstream training. We also fine-tune on downstream tasks until training loss becomes comparable to models with no pruning. We trained most models for 13 epochs rather than 3. Models with 70-90% information deletion required 15 epochs to fit the training data. The results are also included in Figure FIGREF15 and Table TABREF27." ], [ "We might expect that BERT would be more compressible after downstream fine-tuning. Intuitively, the information needed for downstream tasks is a subset of the information learned during pre-training; some tasks require more semantic information than syntactic, and vice-versa. We should be able to discard the “extra\" information and only keep what we need for, say, parsing BIBREF25.", "For magnitude weight pruning specifically, we might expect downstream training to change the distribution of weights in the parameter matrices. This, in turn, changes the sort-order of the absolute values of those weights, which changes the order that we prune them in. This new pruning order, hypothetically, would be less degrading to our specific downstream task.", "To test this, we fine-tuned pre-trained BERT-Base on downstream data for 3 epochs. We then pruned at various sparsity levels and continued training for 5 more epochs (7 for 80/90% sparsity), at which point the training losses became comparable to those of models pruned during pre-training. We repeat this for learning rates in $[2, 3, 4, 5] \\times 10^{-5}$ and show the results with the best development accuracy in Figure FIGREF15 / Table TABREF27. We also measure the difference in which weights are selected for pruning during pre-training vs. downstream fine-tuning and plot the results in Figure FIGREF25." ], [ "Figure FIGREF15 shows that the first 30-40% of weights pruned by magnitude weight pruning do not impact pre-training loss or inference on any downstream task. These weights can be pruned either before or after fine-tuning. This makes sense from the perspective of pruning as sparse architecture search: when we initialize BERT-Base, we initialize many possible subnetworks. SGD selects the best one for pre-training and pushes the rest of the weights to 0. We can then prune those weights without affecting the output of the network." ], [ "Past 40% pruning, performance starts to degrade. Pre-training loss increases as we prune weights necessary for fitting the pre-training data (Table TABREF27). Feature activations of the hidden layers start to diverge from models with low levels of pruning (Figure FIGREF18). Downstream accuracy also begins to degrade at this point.", "We believe this observation may point towards a more principled stopping criterion for pruning. Currently, the only way to know how much to prune is by trial and (dev-set) error. Predictors of performance degradation while pruning might help us decide which level of sparsity is appropriate for a given trained network without trying many at once.", "Why does pruning at these levels hurt downstream performance? On one hand, pruning deletes pre-training information by setting weights to 0, preventing the transfer of the useful inductive biases learned during pre-training. 
On the other hand, pruning regularizes the model by keeping certain weights at zero, which might prevent fitting downstream datasets.", "Figure FIGREF15 and Table TABREF27 show information deletion is the main cause of performance degradation between 40 - 60% sparsity, since pruning and information deletion degrade models by the same amount. Information deletion would not be a problem if pre-training and downstream datasets contained similar information. However, pre-training is effective precisely because the pre-training dataset is much larger than the labeled downstream dataset, which allows learning of more robust representations.", "We see that the main obstacle to compressing pre-trained models is maintaining the inductive bias of the model learned during pre-training. Encoding this bias requires many more weights than fitting downstream datasets, and it cannot be recovered due to a fundamental information gap between pre-training and downstream datasets. The amount a model can be pruned is limited by the largest dataset the model has been trained on: in this case, the pre-training dataset. Practitioners should be aware of this; pruning may subtly harm downstream generalization without affecting training loss.", "We might consider finding a lottery ticket for BERT, which we would expect to fit the GLUE training data just as well as pre-trained BERT BIBREF27, BIBREF28. However, we predict that the lottery-ticket will not reach similar generalization levels unless the lottery ticket encodes enough information to close the information gap." ], [ "At 70% sparsity and above, models with information deletion recover some accuracy w.r.t. pruned models, so complexity restriction is a secondary cause of performance degradation. However, these models do not recover all evaluation accuracy, despite matching un-pruned model's training loss.", "Table TABREF27 shows that on the MNLI and QQP tasks, which have the largest amount of training data, information deletion performs much better than pruning. In contrast, models do not recover as well on SST-2 and CoLA, which have less data. We believe this is because the larger datasets require larger models to fit, so complexity restriction becomes an issue earlier.", "We might be concerned that poorly performing models are over-fitting, since they have lower training losses than unpruned models. But the best performing information-deleted models have the lowest training error of all, so overfitting seems unlikely." ], [ "We've seen that over-pruning BERT deletes information useful for downstream tasks. Is this information equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we've deleted in total. Similarly, the performance of information-deletion models is a proxy for how much of that information was useful for each task. Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy.", "For every bit of information we delete from BERT, it appears only a fraction is useful for CoLA, and an even smaller fraction useful for QQP. This relationship should be taken into account when considering the memory / accuracy trade-off of over-pruning. Pruning an extra 30% of BERT's weights is worth only one accuracy point on QQP but 10 points on CoLA. It's unclear, however, whether this is because the pre-training task is less relevant to QQP or whether QQP simply has a bigger dataset with more information content." 
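The per-task "exchange rate" described here is essentially the slope of the straight-line fit of downstream accuracy against pre-training loss shown in Figure FIGREF18. The snippet below shows that computation on placeholder numbers only; the real (pre-training loss, accuracy) pairs for a task would come from Table TABREF27.

```python
import numpy as np

# Placeholder (pre-training loss, dev accuracy) pairs for one task at increasing
# sparsity -- illustrative values only, not the paper's measurements.
pretrain_loss = np.array([1.9, 2.3, 2.8, 3.6, 4.5])
dev_accuracy = np.array([91.0, 90.2, 88.5, 85.1, 80.3])

slope, intercept = np.polyfit(pretrain_loss, dev_accuracy, deg=1)
print(f"accuracy points lost per unit of pre-training loss: {-slope:.2f}")
```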
], [ "Since pre-training information deletion plays a central role in performance degradation while over-pruning, we might expect that downstream fine-tuning would improve prunability by making important weights more salient (increasing their magnitude). However, Figure FIGREF15 shows that models pruned after downstream fine-tuning do not surpass the development accuracies of models pruned during pre-training, despite achieving similar training losses. Figure FIGREF25 shows fine-tuning changes which weights are pruned by less than 6%.", "Why doesn't fine-tuning change which weights are pruned much? Table TABREF30 shows that the magnitude sorting order of weights is mostly preserved; weights move on average 0-4% away from their starting positions in the sort order. We also see that high magnitude weights are more stable than lower ones (Figure FIGREF31).", "Our experiments suggest that training on downstream data before pruning is too blunt an instrument to improve prunability. Even so, we might consider simply training on the downstream tasks for much longer, which would increase the difference in weights pruned. However, Figure FIGREF26 shows that even after an epoch of downstream fine-tuning, weights quickly re-stabilize in a new sorting order, meaning longer downstream training will have only a marginal effect on which weights are pruned. Indeed, Figure FIGREF25 shows that the weights selected for 60% pruning quickly stabilize and evaluation accuracy does not improve with more training before pruning." ], [ "Compressing BERT for Specific Tasks Section SECREF5 showed that downstream fine-tuning does not increase prunability. However, several alternative compression approaches have been proposed to discard non-task-specific information. BIBREF25 used an information bottleneck to discard non-syntactic information. BIBREF31 used BERT as a knowledge distillation teacher to compress relevant information into smaller Bi-LSTMs, while BIBREF32 took a similar distillation approach. While fine-tuning does not increase prunability, task-specific knowledge might be extracted from BERT with other methods.", "Attention Head Pruning BIBREF33 previously showed redundancy in transformer models by pruning entire attention heads. BIBREF34 showed that after fine-tuning on MNLI, up to 40% of attention heads can be pruned from BERT without affecting test accuracy. They show redundancy in BERT after fine-tuning on a single downstream task; in contrast, our work emphasizes the interplay between compression and transfer learning to many tasks, pruning both before and after fine-tuning. Also, magnitude weight pruning allows us to additionally prune the feed-foward networks and sub-word embeddings in BERT (not just self-attention), which account for $\\sim $72% of BERT's total memory usage.", "We suspect that attention head pruning and weight pruning remove different redundancies from BERT. Figure FIGREF26 shows that weight pruning does not prune any specific attention head much more than the pruning rate for the whole model. It is not clear, however, whether weight pruning and recovery training makes attention heads less prunable by distributing functionality to unused heads." ], [ "We've shown that encoding BERT's inductive bias requires many more weights than are required to fit downstream data. 
Future work on compressing pre-trained models should focus on maintaining that inductive bias and quantifying its relevance to various tasks during accuracy/memory trade-offs.", "For magnitude weight pruning, we've shown that 30-40% of the weights do not encode any useful inductive bias and can be discarded without affecting BERT's universality. The relevance of the rest of the weights vary from task to task, and fine-tuning on downstream tasks does not change the nature of this trade-off by changing which weights are pruned. In future work, we will investigate the factors that influence language modeling's relevance to downstream tasks and how to improve compression in a task-general way.", "It's reasonable to believe that these conclusions will generalize to other pre-trained language models such as Kermit BIBREF3, XLNet BIBREF2, GPT-2 BIBREF4, RoBERTa BIBREF35 or ELMO BIBREF36. All of these learn some variant of language modeling, and most use Transformer architectures. While it remains to be shown in future work, viewing pruning as architecture search implies these models will be prunable due to the training dynamics inherent to neural networks." ] ], "section_name": [ "Introduction", "Pruning: Compression, Regularization, Architecture Search", "Pruning: Compression, Regularization, Architecture Search ::: Magnitude Weight Pruning", "Experimental Setup", "Experimental Setup ::: Implementing BERT Pruning", "Experimental Setup ::: Pruning During Pre-Training", "Experimental Setup ::: Disentangling Complexity Restriction and Information Deletion", "Experimental Setup ::: Pruning After Downstream Fine-tuning", "Pruning Regimes ::: 30-40% of Weights Are Not Useful", "Pruning Regimes ::: Medium Pruning Levels Prevent Information Transfer", "Pruning Regimes ::: High Pruning Levels Also Prevent Fitting Downstream Datasets", "Pruning Regimes ::: How Much Is A Bit Of BERT Worth?", "Downstream Fine-tuning Does Not Improve Prunability", "Related Work", "Conclusion And Future Work" ] }
{ "answers": [ { "annotation_id": [ "458ca650f9fcfbab21f6524e5cd7cceb82103d7c" ], "answer": [ { "evidence": [ "Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede it's ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible?", "To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section.", "Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. This information is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of pruning by a meaningful amount." ], "extractive_spans": [ "we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. " ], "free_form_answer": "", "highlighted_evidence": [ "The main questions this paper attempts to answer are: Does compressing BERT impede it's ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible?\n\nTo explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section.", "Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "d08049e419f2cc2e2e4bcb38fe702938087efeba" ], "answer": [ { "evidence": [ "We've seen that over-pruning BERT deletes information useful for downstream tasks. Is this information equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we've deleted in total. Similarly, the performance of information-deletion models is a proxy for how much of that information was useful for each task. Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy.", "FLOAT SELECTED: Figure 2: (Left) Pre-training loss predicts information deletion GLUE accuracy linearly as sparsity increases. We believe the slope of each line tells us how much a bit of BERT is worth to each task. (CoLA at 90% is excluded from the line of best fit.) (Right) The cosine similarities of features extracted for a subset of the pre-training development data before and after pruning. Features are extracted from activations of all 12 layers of BERT and compared layer-wise to a model that has not been pruned. As performance degrades, cosine similarities of features decreases." ], "extractive_spans": [], "free_form_answer": "The increase is linearly from lowest on average 2.0 , medium around 3.5, and the largest is 6.0", "highlighted_evidence": [ "Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy.", "FLOAT SELECTED: Figure 2: (Left) Pre-training loss predicts information deletion GLUE accuracy linearly as sparsity increases. We believe the slope of each line tells us how much a bit of BERT is worth to each task. (CoLA at 90% is excluded from the line of best fit.) (Right) The cosine similarities of features extracted for a subset of the pre-training development data before and after pruning. Features are extracted from activations of all 12 layers of BERT and compared layer-wise to a model that has not been pruned. As performance degrades, cosine similarities of features decreases." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "How they observe that fine-tuning BERT on a specific task does not improve its prunability?", "How much is pre-training loss increased in Low/Medium/Hard level of pruning?" ], "question_id": [ "dc4096b8bab0afcbbd4fbb015da2bea5d38251cd", "c4c9c7900a0480743acc7599efb359bc81cf3a4d" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "computer vision", "computer vision" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: (Blue) The best GLUE dev accuracy and training losses for models pruned during pretraining, averaged over 5 tasks. Also shown are models with information deletion during pre-training (orange), models pruned after downstream fine-tuning (green), and models pruned randomly during pre-training instead of by lowest magnitude (red). 30-40% of weights can be pruned using magnitude weight pruning without decreasing dowsntream accuracy. Notice that information deletion fits the training data better than un-pruned models at all sparsity levels but does not fully recover evaluation accuracy. Also, models pruned after downstream fine-tuning have the same or worse development accuracy, despite achieving lower training losses. Note: none of the pruned models are overfitting because un-pruned models have the lowest training loss and the highest development accuracy. While the results for individual tasks are in Table 1, each task does not vary much from the average trend, with an exception discussed in Section 4.3.", "Figure 2: (Left) Pre-training loss predicts information deletion GLUE accuracy linearly as sparsity increases. We believe the slope of each line tells us how much a bit of BERT is worth to each task. (CoLA at 90% is excluded from the line of best fit.) (Right) The cosine similarities of features extracted for a subset of the pre-training development data before and after pruning. Features are extracted from activations of all 12 layers of BERT and compared layer-wise to a model that has not been pruned. As performance degrades, cosine similarities of features decreases.", "Figure 3: (Left) The measured difference in pruning masks between models pruned during pretraining and models pruned during downstream fine-tuning. As predicted, the differences are less than 6%, since fine-tuning only changes the magnitude sorting order of weights locally, not globally. (Right) The average GLUE development accuracy and pruning mask difference for models trained on downstream datasets before pruning 60% at learning rate 5e-5. After pruning, models are trained for an additional 2 epochs to regain accuracy. We see that training between 3 and 12 epochs before pruning does not change which weights are pruned or improve performance.", "Figure 4: (Left) The average, min, and max percentage of individual attention heads pruned at each sparsity level. We see at 60% sparsity, each attention head individually is pruned strictly between 55% and 65%. (Right) We compute the magnitude sorting order of each weight before and after downstream fine-tuning. If a weight’s original position is 59 / 100 before fine-tuning and 63 / 100 after fine-tuning, then that weight moved 4% in the sorting order. After even an epoch of downstream fine-tuning, weights quickly stabilize in a new sorting order which is not far from the original sorting order. Variances level out similarly.", "Table 1: Pre-training development losses and GLUE task development accuracies for various levels of pruning. Each development accuracy is accompanied on its right by the achieved training loss, evaluated on the entire training set. Averages are summarized in Figure 1. Pre-training losses are omitted for models pruned after downstream fine-tuning because it is not clear how to measure their performance on the pre-training task in a fair way.", "Figure 5: The sum of weights pruned at each sparsity level for one shot pruning of BERT. 
Given the motivation for our saliency criterion, it seems strange that such a large magnitude of weights can be pruned without decreasing accuracy.", "Table 2: We compute the magnitude sorting order of each weight before and after downstream finetuning. If a weight’s original position is 59 / 100 before fine-tuning and 63 / 100 after fine-tuning, then that weight moved 4% in the sorting order. We then list the average movement of weights in each model, along with the standard deviation. Sorting order changes mostly locally across tasks: a weight moves, on average, 0-4% away from its starting position. As expected, larger datasets and larger learning rates have more movement (per epoch). We also see that higher magnitude weights are more stable than lower weights, see Figure 7.", "Figure 7: We show how weight sort order movements are distributed during fine-tuning, given a weight’s starting magnitude. We see that higher magnitude weights are more stable than lower magnitude weights and do not move as much in the sort order. This plot is nearly identical for every model and learning rate, so we only show it once.", "Figure 8: A heatmap of the weight magnitudes of the 12 horizontally stacked self-attention key projection matrices for layer 1. A banding pattern can be seen: the highest values of the matrix tend to cluster in certain attention heads. This pattern appears in most of the self-attention parameter matrices, but it does not cause pruning to prune one head more than another. However, it may prove to be a useful heuristic for attention head pruning, which would not require making many passes over the training data.", "Figure 9: A heatmap of the weight magnitudes of BERT’s subword embeddings. Interestingly, pruning BERT embeddings are more interpretable; we can see shorter subwords (top rows) have smaller magnitude values and thus will be pruned earlier than other subword embeddings.", "Table 3: The values of BERT’s weights are normally distributed in each weight matrix. The means and variances are listed for each." ], "file": [ "5-Figure1-1.png", "5-Figure2-1.png", "7-Figure3-1.png", "8-Figure4-1.png", "12-Table1-1.png", "13-Figure5-1.png", "13-Table2-1.png", "13-Figure7-1.png", "14-Figure8-1.png", "14-Figure9-1.png", "16-Table3-1.png" ] }
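The captions above describe two computations for this pruning record: magnitude weight pruning and the movement of a weight's magnitude sort order before versus after fine-tuning. A minimal NumPy sketch of both is given below for illustration only; the function names, the per-matrix (rather than global) pruning, and the percentile-rank formulation of sort order are assumptions, not details taken from the paper.

import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the fraction `sparsity` of entries with the lowest absolute value.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def sort_order_movement(before, after):
    # Percentile rank of each weight's magnitude before vs. after fine-tuning;
    # a value of 0.04 means the weight moved 4% of the way through the sort order.
    rank_before = np.argsort(np.argsort(np.abs(before).ravel())) / before.size
    rank_after = np.argsort(np.argsort(np.abs(after).ravel())) / after.size
    return np.abs(rank_after - rank_before)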
[ "How much is pre-training loss increased in Low/Medium/Hard level of pruning?" ]
[ [ "2002.08307-Pruning Regimes ::: How Much Is A Bit Of BERT Worth?-0", "2002.08307-5-Figure2-1.png" ] ]
[ "The increase is linearly from lowest on average 2.0 , medium around 3.5, and the largest is 6.0" ]
523
1707.08559
Video Highlight Prediction Using Audience Chat Reactions
Sports channel video portals offer an exciting domain for research on multimodal, multilingual analysis. We present methods addressing the problem of automatic video highlight prediction based on joint visual features and textual analysis of the real-world audience discourse with complex slang, in both English and traditional Chinese. We present a novel dataset based on League of Legends championships recorded from North American and Taiwanese Twitch.tv channels (will be released for further research), and demonstrate strong results on these using multimodal, character-level CNN-RNN model architectures.
{ "paragraphs": [ [ "On-line eSports events provide a new setting for observing large-scale social interaction focused on a visual story that evolves over time—a video game. While watching sporting competitions has been a major source of entertainment for millennia, and is a significant part of today's culture, eSports brings this to a new level on several fronts. One is the global reach, the same games are played around the world and across cultures by speakers of several languages. Another is the scale of on-line text-based discourse during matches that is public and amendable to analysis. One of the most popular games, League of Legends, drew 43 million views for the 2016 world series final matches (broadcast in 18 languages) and a peak concurrent viewership of 14.7 million. Finally, players interact through what they see on screen while fans (and researchers) can see exactly the same views.", "This paper builds on the wealth of interaction around eSports to develop predictive models for match video highlights based on the audience's online chat discourse as well as the visual recordings of matches themselves. ESports journalists and fans create highlight videos of important moments in matches. Using these as ground truth, we explore automatic prediction of highlights via multimodal CNN+RNN models for multiple languages. Appealingly this task is natural, as the community already produces the ground truth and is global, allowing multilingual multimodal grounding.", "Highlight prediction is about capturing the exciting moments in a specific video (a game match in this case), and depends on the context, the state of play, and the players. This task of predicting the exciting moments is hence different from summarizing the entire match into a story summary. Hence, highlight prediction can benefit from the available real-time text commentary from fans, which is valuable in exposing more abstract background context, that may not be accessible with computer vision techniques that can easily identify some aspects of the state of play. As an example, computer vision may not understand why Michael Jordan's dunk is a highlight over that of another player, but concurrent fan commentary might reveal this.", "We collect our dataset from Twitch.tv, one of the live-streaming platforms that integrates comments (see Fig. FIGREF2 ), and the largest live-streaming platform for video games. We record matches of the game League of Legends (LOL), one of the largest eSports game in two subsets, 1) the spring season of the North American League of Legends Championship Series (NALCS), and 2) the League of Legends Master Series (LMS) hosted in Taiwan/Macau/HongKong, with chat comments in English and traditional Chinese respectively. We use the community created highlights to label each frame of a match as highlight or not.", "In addition to our new dataset, we present several experiments with multilingual character-based models, deep-learning based vision models either per-frame or tied together with a video-sequence LSTM-RNN, and combinations of language and vision models. Our results indicate that while surprisingly the visual models generally outperform language-based models, we can still build reasonably useful language models that help disambiguate difficult cases for vision models, and that combining the two sources is the most effective model (across multiple languages)." ], [ "We briefly discuss a small sample of the related work on language and vision datasets, summarization, and highlight prediction. 
There has been a surge of vision and language datasets focusing on captions over the last few years, BIBREF0 , BIBREF1 , BIBREF2 , followed by efforts to focus on more specific parts of images BIBREF3 , or referring expressions BIBREF4 , or on the broader context BIBREF5 . For video, similar efforts have collected descriptions BIBREF6 , while others use existing descriptive video service (DVS) sources BIBREF7 , BIBREF8 . Beyond descriptions, other datasets use questions to relate images and language BIBREF9 , BIBREF10 . This approach is extended to movies in MovieQA.", "The related problem of visually summarizing videos (as opposed to finding the highlights) has produced datasets of holiday and sports events with multiple users making summary videos BIBREF11 and multiple users selecting summary key-frames BIBREF12 from short videos. For language-based summarization, Extractive models BIBREF13 , BIBREF14 generate summaries by selecting important sentences and then assembling these, while Abstractive models BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 generate/rewrite the summaries from scratch.", "Closer to our setting, there has been work on highlight prediction in football (soccer) and basketball based on audio of broadcasts BIBREF19 BIBREF20 where commentators may have an outsized impact or visual features BIBREF21 . In the spirit of our study, there has been work looking at tweets during sporting events BIBREF22 , but the tweets are not as immediate or as well aligned with the games as the eSports comments. More closely related to our work, yahooesports collects videos for Heroes of the Storm, League of Legends, and Dota2 on online broadcasting websites of around 327 hours total. They also provide highlight labeling annotated by four annotators. Our method, on the other hand, has a similar scale of data, but we use existing highlights, and we also employ textual audience chat commentary, thus providing a new resource and task for Language and Vision research. In summary, we present the first language-vision dataset for video highlighting that contains audience reactions in chat format, in multiple languages. The community produced ground truth provides labels for each frame and can be used for supervised learning. The language side of this new dataset presents interesting challenges related to real-world Internet-style slang." ], [ "Our dataset covers 218 videos from NALCS and 103 from LMS for a total of 321 videos from week 1 to week 9 in 2017 spring series from each tournament. Each week there are 10 matches for NALCS and 6 matches for LMS. Matches are best of 3, so consist of two games or three games. The first and third games are used for training. The second games in the first 4 weeks are used as validation and the remainder of second games are used as test. Table TABREF3 lists the numbers of videos in train, validation, and test subsets.", "Each game's video ranges from 30 to 50 minutes in length which contains image and chat data linked to the specific timestamp of the game. The average number of chats per video is 7490 with a standard deviation of 4922. The high value of standard deviation is mostly due to the fact that NALCS simultaneously broadcasts matches in two different channels (nalcs1 and nalcs2) which often leads to the majority of users watching the channel with a relatively more popular team causing an imbalance in the number of chats. If we only consider LMS which broadcasts with a single channel, the average number of chats are 7210 with standard deviation of 2719. 
The number of viewers for each game averages about 21526, and the number of unique users who type in chat is on average 2185, i.e., roughly 10% of the viewers." ], [ "In this section, we explain the proposed models and components. We first describe the notation and definition of the problem, plus the evaluation metric used. Next, we explain our vision model V-CNN-LSTM and language model L-Char-LSTM. Finally, we describe the joint multimodal model INLINEFORM0 -LSTM." ], [ "We presented a new dataset and multimodal methods for highlight prediction, based on visual cues and textual audience chat reactions in multiple languages. We hope our new dataset can encourage further multilingual, multimodal research." ], [ "We thank Tamara Berg, Phil Ammirato, and the reviewers for their helpful suggestions, and we acknowledge support from NSF 1533771." ] ], "section_name": [ "Introduction", "Related Work", "Data Collection", "Model", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "66530989d292be1a7585169dd36fcae82e7cd385" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "d4633a5609ee500a11f18e8f29932e99f5ac42e4" ], "answer": [ { "evidence": [ "Each game's video ranges from 30 to 50 minutes in length which contains image and chat data linked to the specific timestamp of the game. The average number of chats per video is 7490 with a standard deviation of 4922. The high value of standard deviation is mostly due to the fact that NALCS simultaneously broadcasts matches in two different channels (nalcs1 and nalcs2) which often leads to the majority of users watching the channel with a relatively more popular team causing an imbalance in the number of chats. If we only consider LMS which broadcasts with a single channel, the average number of chats are 7210 with standard deviation of 2719. The number of viewers for each game averages about 21526, and the number of unique users who type in chat is on average 2185, i.e., roughly 10% of the viewers." ], "extractive_spans": [], "free_form_answer": "40 minutes", "highlighted_evidence": [ "Each game's video ranges from 30 to 50 minutes in length which contains image and chat data linked to the specific timestamp of the game." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "c5dee03918398f5c6cc92efd15f4b4cc70dde312" ], "answer": [ { "evidence": [ "Our dataset covers 218 videos from NALCS and 103 from LMS for a total of 321 videos from week 1 to week 9 in 2017 spring series from each tournament. Each week there are 10 matches for NALCS and 6 matches for LMS. Matches are best of 3, so consist of two games or three games. The first and third games are used for training. The second games in the first 4 weeks are used as validation and the remainder of second games are used as test. Table TABREF3 lists the numbers of videos in train, validation, and test subsets." ], "extractive_spans": [ "321 videos" ], "free_form_answer": "", "highlighted_evidence": [ "Our dataset covers 218 videos from NALCS and 103 from LMS for a total of 321 videos from week 1 to week 9 in 2017 spring series from each tournament. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "45cda4083447220b165a7cb5da732a38800af3af" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Test Results on the NALCS (English) and LMS (Traditional Chinese) datasets." ], "extractive_spans": [], "free_form_answer": "Best model achieved F-score 74.7 on NALCS and F-score of 70.0 on LMS on test set", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Test Results on the NALCS (English) and LMS (Traditional Chinese) datasets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "What was the baseline?", "What is the average length of the recordings?", "How big was the dataset presented?", "What were their results?" 
], "question_id": [ "762f2527f85c3ae6bbdf1f331311930ef1e1fa51", "e414d819f10c443cbefa8bdb9bd486ffc6d1fc6a", "2e73006e5d007aa08c62030a4d5a7e2e7e0eaf6c", "1a8b7d3d126935c09306cacca7ddb4b953ef68ab" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1: Pictures of Broadcasting platforms:(a) Twitch: League of Legends Tournament Broadcasting, (b) Youtube: News Channel, (c)Facebook: Personal live sharing", "Figure 2: Highlight Labeling: (a) The feature representation of each frame is calculated by averaging each color channel in each subregion. (b) After template matching, the top bar shows the maximum of similarity matching of each frame in the highlight and the bottom bar is the labeling result of the video.", "Table 1: Dataset statistics (number of videos).", "Figure 3: Network architecture of proposed models.", "Table 2: Ablation Study: Effects of various models. C:Chat, V:Video, UF: % of frames Used in highlight clips as positive training examples; P: Precision, R: Recall, F: F-score.", "Table 3: Test Results on the NALCS (English) and LMS (Traditional Chinese) datasets." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "3-Table1-1.png", "4-Figure3-1.png", "5-Table2-1.png", "5-Table3-1.png" ] }
[ "What is the average length of the recordings?", "What were their results?" ]
[ [ "1707.08559-Data Collection-1" ], [ "1707.08559-5-Table3-1.png" ] ]
[ "40 minutes", "Best model achieved F-score 74.7 on NALCS and F-score of 70.0 on LMS on test set" ]
525
1912.10806
DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News
Stock price prediction is important for value investments in the stock market. In particular, short-term prediction that exploits financial news articles has shown promise in recent years. In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism. First, based on the autoregressive moving average (ARMA) model, a sentiment-ARMA model is formulated that takes the information in financial news articles into consideration. Then, an LSTM-based deep neural network is designed, which consists of three components: an LSTM, the VADER model and a differential privacy (DP) mechanism. The proposed DP-LSTM scheme can reduce prediction errors and increase robustness. Extensive experiments on S&P 500 stocks show that (i) the proposed DP-LSTM achieves a 0.32% improvement in the mean MPA of the prediction results, and (ii) for the prediction of the market index S&P 500, we achieve up to a 65.79% improvement in MSE.
{ "paragraphs": [ [ "Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by a lot of factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use these variable information. In particular, in the banking industry and financial services, analysts' armies are dedicated to pouring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large amount of text and quantitative information that is involved in the analysis.", "Investors may judge on the basis of technical analysis, such as charts of a company, market indices, and on textual information such as news blogs or newspapers. It is however difficult for investors to analyze and predict market trends according to all of these information [22]. A lot of artificial intelligence approaches have been investigated to automatically predict those trends [3]. For instance, investment simulation analysis with artificial markets or stock trend analysis with lexical cohesion based metric of financial news' sentiment polarity. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data that are involved, but also the kind of language that is used in them to express sentiments, which means emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming. It also requires a great deal of resources and expertise to analyze all of that [4].", "To solve the above problem, in this paper we use sentiment analysis to extract information from textual information. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantifies reactions or sentiments of the general public toward people, ideas or certain products and reveal the information's contextual polarity. Sentiment analysis allows us to understand if newspapers are talking positively or negatively about the financial market, get key insights about the stock's future trend market.", "We use valence aware dictionary and sentiment reasoner (VADER) to extract sentiment scores. VADER is a lexicon and rule-based sentiment analysis tool attuned to sentiments that are expressed in social media specifically [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts. This is because VADER not only tells about the negativity score and positively but also tells us about how positive or negative a sentiment is.", "However, news reports are not all objective. We may increase bias because of some non-objective reports, if we rely on the information that is extracted from the news for prediction fully. Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. 
For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row.", "In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is \"roughly as likely\" to occur independent of whether any individual opts in to, or to opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and a lot of datamining tasks can be carried out in a DP method, frequently with very accurate results [21]. We proposed a DP-LSTM neural network, which increase the accuracy of prediction and robustness of model at the same time.", "The remainder of the paper is organized as follows. In Section 2, we introduce stock price model, the sentiment analysis and differential privacy method. In Section 3, we develop the different privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper." ], [ "In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. At last, we introduce the differential privacy framework and the loss function." ], [ "The ARMA model, which is one of the most widely used linear models in time series prediction [17], where the future value is assumed as a linear combination of the past errors and past values. ARMA is used to set the stock midterm prediction problem up. Let ${X}_t^\\text{A}$ be the variable based on ARMA at time $t$, then we have", "where $X_{t-i}$ denotes the past value at time $t-i$; $\\epsilon _{t}$ denotes the random error at time $t$; $\\phi _i$ and $\\psi _j$ are the coefficients; $\\mu $ is a constant; $p$ and $q$ are integers that are often referred to as autoregressive and moving average polynomials, respectively." ], [ "Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on subjectivity's computational treatment in a text [19]-[20].", "Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles that were based on VADER. VADER uses a combination of a sentiment lexicon which are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews. It is fully open-sourced under the MIT License. The result of VADER represent as sentiment scores, which include the positive, negative and neutral scores represent the proportion of text that falls in these categories. This means all these three scores should add up to 1. 
Besides, the Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1(most extreme negative) and +1 (most extreme positive). Figure FIGREF5 shows the positive and negative wordcloud, which is an intuitive analysis of the number of words in the news titles." ], [ "To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows", "where $\\alpha $ and $\\lambda $ are weighting factors; $c$ is a constant; and $f_2(\\cdot )$ is similar to $f_1(\\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem.", "In this paper, the LSTM neural network is used to predict the stock price, the input data is the previous stock price and the sentiment analysis results. Hence, the sentiment based LSTM neural network (named sentiment-LSTM) is aimed to minimize the following loss function:", "where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data), $t = p+1,...,p+T$ are the predicts (training output data); and $\\hat{X}_t$ is given in (DISPLAY_FORM7)." ], [ "Denote $\\mathcal {X}_t^{\\text{train}} = \\lbrace X_{t-i},S_{t-i}\\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the LSTM's structure network, which comprises one or more hidden layers, an output layer and an input layer [16]. LSTM networks' main advantage is that the hidden layer comprises memory cells. Each memory cell recurrently has a core self-connected linear unit called “ Constant Error Carousel (CEC)” [13], which provides short-term memory storage and has three gates:", "Input gate, which controls the information from a new input to the memory cell, is given by", "where $h_{t-1}$ is the hidden state at the time step $t-1$; $i_t$ is the output of the input gate layer at the time step $t$; $\\hat{c}_t$ is the candidate value to be added to the output at the time step $t$; $b_i$ and $b_c$ are biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are weights of the input gate and the candidate value computation, respectively; and $\\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function.", "Forget gate, which controls the limit up to which a value is saved in the memory, is given by", "where $f_t$ is the forget state at the time step $t$, $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate.", "Output gate, which controls the information output from the memory cell, is given by", "where new cell states $c_t$ are calculated based on the results of the previous two steps; $o_t$ is the output at the time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14]." ], [ "Differential privacy is one of privacy's most popular definitions today, which is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\\mathcal {N}$ as input, and outputs a random variable ${f}(\\mathcal {N})$. 
For example, suppose $\\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7].", "Although differential privacy was originally developed to facilitate secure analysis over sensitive data, it can also enhance the robustness of the data. Note that finance data, especially news data and stock data, is unstable with a lot of noise, with a more robust data the accuracy of prediction will be improved. Since we predict stock price by fusing news come from different sources, which might include fake news. Involving differential privacy in the training to improve the robustness of the finance news is meaningful." ], [ "It is known that it is risky to predict stocks by considering news factors, because news can't guarantee full notarization and objectivity, many times extreme news will have a big impact on prediction models. To solve this problem, we consider entering the idea of the differential privacy when training. In this section, our DP-LSTM deep neural network training strategy is presented. The input data consists of three components: stock price, sentiment analysis compound score and noise." ], [ "The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data. Since our paper illustrates the relationship between the sentiment of the news articles and stocks' price. Hence, only news article from financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018. The former 85% of the dataset is used as the training data and the remainder 15% is used as the testing data. The News publishers for this data are CNBC.com, Reuters.com, WSJ.com, Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, which coverage of breaking news and current headlines from the US and around the world include top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, which following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use NLTK sentiment package alence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores.", "The stocks with missing data are deleted, and the dataset we used eventually contains 451 stocks and 4 news resources (CNBC.com, Reuters.com, WSJ.comFortune.com.). Each stock records the adjust close price and news compound scores of 121 trading days.", "A rolling window with size 10 is used to separate data, that is, We predict the stock price of the next trading day based on historical data from the previous 10 days, hence resulting in a point-by-point prediction [15]. In particular, the training window is initialized with all real training data. Then we shift the window and add the next real point to the last point of training window to predict the next point and so forth. 
Then, according to the length of the window, the training data is divided into 92 sets of training input data (each set of length 10) and training output data (each set of length 1). The testing data is divided into input and output data of 9 windows (see Figure FIGREF20)." ], [ "To detect stock price patterns, it is necessary to normalize the stock price data. Since the LSTM neural network requires the stock patterns during training, we use the “min-max” normalization method to reform the dataset, which keeps the pattern of the data [11], as follows:", "where $X_{t}^{n}$ denotes the data after normalization. Accordingly, de-normalization is required at the end of the prediction process to get the original price, which is given by", "where $\hat{X}_{t}^{n}$ denotes the predicted data and $\hat{X}_{t}$ denotes the predicted data after de-normalization.", "Note that the compound score is not normalized, since the compound score already ranges from -1 to 1, which means all the compound score data has the same scale, so normalization is not required." ], [ "We consider differential privacy as a method to improve the robustness of the LSTM predictions [8]. We explore the interplay between machine learning and differential privacy, and find that differential privacy has several properties that make it particularly useful in applications such as the robust extraction of textual information [9]. The robustness of textual information means that accuracy is guaranteed to be unaffected by certain false information [10].", "The input data of the model has 5 dimensions, which are the stock price and four compound scores: $(X^t, S_1^t, S_2^t, S_3^t, S_4^t), t=1,...,T$, where $X^t$ represents the stock price and $S_i^t,~i=1,...,4$ respectively denote the mean compound scores calculated from WSJ, CNBC, Fortune and Reuters. According to the process of differential privacy, we add Gaussian noise with different variances to the news according to the variance of the news, i.e., the news compound score after adding noise is given by", "where $\text{var}(\cdot )$ is the variance operator, $\lambda $ is a weighting factor and $\mathcal {N}(\cdot )$ denotes the random Gaussian process with zero mean and variance $\lambda \text{var}(S_i)$.", "We used Python to crawl the news from the four sources for each trading day, perform sentiment analysis on the title of each news article, and get the compound score. After splitting the data into training and test sets, we separately add noise to each of the four news sources of the training set; then, for the $n$-th stock, the four sets of noise-added data $(X^n_t, \widetilde{S}^t_1, S^t_2, S^t_3, S^t_4)$, $(X^n_t, S^t_1, \widetilde{S}^t_2, S^t_3, S^t_4)$, $(X^n_t, S^t_1, S^t_2, \widetilde{S}^t_3, S^t_4)$, $(X^n_t, S^t_1, S^t_2, S^t_3, \widetilde{S}^t_4)$ are combined into new training data through a rolling window. The stock price is then combined with the new compound score training data as the input data for our DP-LSTM neural network." ], [ "The LSTM model in Figure FIGREF10 has six layers: an LSTM layer, a dropout layer, an LSTM layer, an LSTM layer, a dropout layer and a dense layer. The dropout layers (with dropout rate 0.2) prevent the network from overfitting. The dense layer is used to reshape the output.
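For concreteness, the layer stack just described could be assembled as in the following Keras-style sketch. The layer order, the dropout rate of 0.2, the 10-day window and the 5-dimensional input follow the description above, and the MSE loss and ADAM optimizer match the training setting described next; the number of units per LSTM layer is an illustrative assumption, as it is not given in the text.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def build_dp_lstm(window=10, n_features=5, units=64):
    # units=64 is a placeholder choice, not a value reported by the authors.
    model = Sequential([
        LSTM(units, return_sequences=True, input_shape=(window, n_features)),
        Dropout(0.2),
        LSTM(units, return_sequences=True),
        LSTM(units),
        Dropout(0.2),
        Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model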
Since a network is difficult to train if it contains a large number of LSTM layers [16], we use three LSTM layers here.", "The loss function is the mean square error (MSE), which is the sum of the squared distances between our target variable and the predicted value. In addition, ADAM [17] is used as the optimizer, since it is straightforward to implement, computationally efficient and well suited to problems with large datasets and many parameters.", "There are many methods and algorithms for implementing sentiment analysis systems. In this paper, we use rule-based systems that perform sentiment analysis based on a set of manually crafted rules. Usually, rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, polarity, or the subject of an opinion. We use VADER, a simple rule-based model for general sentiment analysis." ], [ "In this section, we validate our DP-LSTM on the S&P 500 stocks. We calculate the mean prediction accuracy (MPA) to evaluate the proposed methods, which is defined as", "where $X_{t,\ell }$ is the real stock price of the $\ell $-th stock on the $t$-th day, $L$ is the number of stocks and $\hat{X}_{t,\ell }$ is the corresponding prediction result.", "Figure FIGREF27 plots the average score over all news on the same day over the period. The compound score fluctuates between -0.3 and 0.15, indicating an overall neutral to slightly negative sentiment. The Positive, Negative and Neutral scores represent the proportion of text that falls in these categories. The Compound score is a metric that calculates the sum of all the lexicon ratings, normalized to lie between -1 (most extreme negative) and +1 (most extreme positive).", "Figure FIGREF29 shows the MPAs of the proposed DP-LSTM and the vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the predicted prices, which show that the accuracy of DP-LSTM is 0.32% higher than that of the LSTM with news. This result means the DP framework can make the prediction results more accurate and robust.", "Note that the results are obtained by running many trials, since we train stocks separately and predict each price individually due to the different patterns and scales of stock prices. This adds up to 451 runs in total. The results shown in Table TABREF30 are the average of these 451 runs. Furthermore, we provide results for 9 durations over the period in Figure FIGREF29. The performance of our DP-LSTM is always better than that of the LSTM with news. Based on the sentiment-ARMA model and the addition of noise during training, the proposed DP-LSTM is more robust. The investment risk based on these prediction results is reduced.", "In Figure FIGREF31, we can see that the prediction results of DP-LSTM are closer to the real S&P 500 index price line than those of the other methods. The two lines (the prediction results of the LSTM with news and the LSTM without news) almost coincide in Figure FIGREF31. We can tell the subtle differences from Table TABREF32: DP-LSTM is far ahead, and the LSTM with news is slightly better than the LSTM without news." ], [ "In this paper, we integrated a deep neural network with a well-known NLP model (VADER) to identify and extract opinions within a given text, combining the stock adjusted close price and the compound score to reduce the investment risk. We first proposed a sentiment-ARMA model to represent the stock price, which incorporates influential variables (price and news) based on the ARMA model.
Then, a DP-LSTM deep neural network was proposed to predict stock price according to the sentiment-ARMA model, which combines the LSTM, compound score of news articles and differential privacy method. News are not all objective. If we rely on the information extracted from the news for prediction fully, we may increase bias because of some non-objective reports. Therefore, the DP-LSTM enhance robustness of the prediction model. Experiment results based on the S&P 500 stocks show that the proposed DP-LSTM network can predict the stock price accurately with robust performance, especially for S&P 500 index that reflects the general trend of the market. S&P 500 prediction results show that the differential privacy method can significantly improve the robustness and accuracy." ], [ "[1] X. Li, Y. Li, X.-Y. Liu, D. Wang, “Risk management via anomaly circumvent: mnemonic deep learning for midterm stock prediction.” in Proceedings of 2nd KDD Workshop on Anomaly Detection in Finance (Anchorage ’19), 2019.", "[2] P. Chang, C. Fan, and C. Liu, “Integrating a piece-wise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, 1 (2009), 80–92.", "[3] Akita, Ryo, et al. “Deep learning for stock prediction using numerical and textual information.” IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS). IEEE, 2016.", "[4] Li, Xiaodong, et al. “Does summarization help stock prediction? A news impact analysis.” IEEE Intelligent Systems 30.3 (2015): 26-34.", "[5] Ding, Xiao, et al. “Deep learning for event-driven stock prediction.” Twenty-fourth International Joint Conference on Artificial Intelligence. 2015.", "[6] Hutto, Clayton J., and Eric Gilbert. “Vader: A parsimonious rule-based model for sentiment analysis of social media text.” Eighth International AAAI Conference on Weblogs and Social Media, 2014.", "[7] Ji, Zhanglong, Zachary C. Lipton, and Charles Elkan. “Differential privacy and machine learning: a survey and review.” arXiv preprint arXiv:1412.7584 (2014).", "[8] Abadi, Martin, et al. “Deep learning with differential privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016.", "[9] McMahan, H. Brendan, and Galen Andrew. “A general approach to adding differential privacy to iterative training procedures.” arXiv preprint arXiv:1812.06210 (2018).", "[10] Lecuyer, Mathias, et al. “Certified robustness to adversarial examples with differential privacy.” arXiv preprint arXiv:1802.03471 (2018).", "[11] Hafezi, Reza, Jamal Shahrabi, and Esmaeil Hadavandi. “A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price.” Applied Soft Computing, 29 (2015): 196-210.", "[12] Chang, Pei-Chann, Chin-Yuan Fan, and Chen-Hao Liu. “Integrating a piecewise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39.1 (2008): 80-92.", "[13] Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. “Learning precise timing with LSTM recurrent networks.” Journal of Machine Learning Research 3.Aug (2002): 115-143.", "[14] Qin, Yao, et al. “A dual-stage attention-based recurrent neural network for time series prediction.” arXiv preprint arXiv:1704.02971 (2017).", "[15] Malhotra, Pankaj, et al. 
“Long short term memory networks for anomaly detection in time series.” Proceedings. Presses universitaires de Louvain, 2015.", "[16] Sak, Haşim, Andrew Senior, and Françoise Beaufays. “Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” Fifteenth annual conference of the international speech communication association, 2014.", "[17] Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014).", "[18] Box, George EP, et al. Time series analysis: forecasting and control. John Wiley & Sons, 2015.", "[19] Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and Trends in Information Retrieval 2.1–2 (2008): 1-135.", "[20] Cambria, Erik. “Affective computing and sentiment analysis.” IEEE Intelligent Systems 31.2 (2016): 102-107.", "[21] Dwork C, Lei J. Differential privacy and robust statistics//STOC. 2009, 9: 371-380.", "[22] X. Li, Y. Li, Y. Zhan, and X.-Y. Liu. “Optimistic bull or pessimistic bear: adaptive deep reinforcement learning for stock portfolio allocation.” in Proceedings of the 36th International Conference on Machine Learning, 2019." ] ], "section_name": [ "Introduction", "Problem Statement", "Problem Statement ::: ARMA Model", "Problem Statement ::: Sentiment Analysis", "Problem Statement ::: Sentiment-ARMA Model and Loss Function", "Problem Statement ::: Overview of LSTM", "Problem Statement ::: Definition of Differential Privacy", "Training DP-LSTM Neural Network", "Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing", "Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Normalization", "Training DP-LSTM Neural Network ::: Adding Noise", "Training DP-LSTM Neural Network ::: Training Setting", "Performance Evaluation", "Conclusion", "References" ] }
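Pulling together the preprocessing and evaluation steps described in the sections above, the following NumPy sketch shows min-max normalization, de-normalization and the MPA metric. The display equations are not reproduced in this text, so the exact formulas below, in particular MPA taken as one minus the mean absolute relative error across stocks, are assumptions consistent with the surrounding descriptions rather than the authors' definitions.

import numpy as np

def minmax_normalize(x):
    # x: 1-D array of prices for one stock; returns normalized prices plus the
    # statistics needed for later de-normalization.
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def minmax_denormalize(x_norm, stats):
    lo, hi = stats
    return x_norm * (hi - lo) + lo

def mean_prediction_accuracy(real, pred):
    # real, pred: arrays of shape (T, L), i.e. T test days by L stocks.
    # Assumed form: MPA_t = 1 - (1/L) * sum_l |real - pred| / real.
    return 1.0 - np.mean(np.abs(real - pred) / real, axis=1)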
{ "answers": [ { "annotation_id": [ "ec5a4453d309c8ff8acdb8ce53c33fc8450e31d3" ], "answer": [ { "evidence": [ "Figure FIGREF29 shows the $\\text{MPAs}$ of the proposed DP-LSTM and vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the prediction prices, which shows the accuracy performance of DP-LSTM is 0.32% higer than the LSTM with news. The result means the DP framework can make the prediction result more accuracy and robustness." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Figure FIGREF29 shows the $\\text{MPAs}$ of the proposed DP-LSTM and vanilla LSTM for comparison. " ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "464e21a41aa0b0d3829ae7d042dfe43a45f7112d" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Predicted Mean MPA results.", "FLOAT SELECTED: Table 2: S&P 500 predicted results." ], "extractive_spans": [], "free_form_answer": "mean prediction accuracy 0.99582651\nS&P 500 Accuracy 0.99582651", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Predicted Mean MPA results.", "FLOAT SELECTED: Table 2: S&P 500 predicted results." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "bb288c0a79c3e79eb757f5893e52921449dd3c2a" ], "answer": [ { "evidence": [ "The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data. Since our paper illustrates the relationship between the sentiment of the news articles and stocks' price. Hence, only news article from financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018. The former 85% of the dataset is used as the training data and the remainder 15% is used as the testing data. The News publishers for this data are CNBC.com, Reuters.com, WSJ.com, Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, which coverage of breaking news and current headlines from the US and around the world include top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, which following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use NLTK sentiment package alence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores." ], "extractive_spans": [], "free_form_answer": "historical S&P 500 component stocks\n 306242 news articles", "highlighted_evidence": [ "The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data.", "Hence, only news article from financial domain are collected. 
The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "6143c58be89dd7566e30900ecab79521aa148498" ], "answer": [ { "evidence": [ "However, news reports are not all objective. We may increase bias because of some non-objective reports, if we rely on the information that is extracted from the news for prediction fully. Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row.", "Differential privacy is one of privacy's most popular definitions today, which is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\\mathcal {N}$ as input, and outputs a random variable ${f}(\\mathcal {N})$. For example, suppose $\\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]." ], "extractive_spans": [ "A mechanism ${f}$ is a random function that takes a dataset $\\mathcal {N}$ as input, and outputs a random variable ${f}(\\mathcal {N})$." ], "free_form_answer": "", "highlighted_evidence": [ "Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row.", " It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\\mathcal {N}$ as input, and outputs a random variable ${f}(\\mathcal {N})$. For example, suppose $\\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Is the model compared against a linear regression baseline?", "What is the prediction accuracy of the model?", "What is the dataset used in the paper?", "How does the differential privacy mechanism work?" ], "question_id": [ "5c0b8c1b649df1b07d9af3aa9154ac340ec8b81c", "2e1ededb7c8460169cf3c38e6cde6de402c1e720", "3b391cd58cf6a61fe8c8eff2095e33794e80f0e3", "4d8ca3f7aa65dcb42eba72acf3584d37b416b19c" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "stock market", "stock market", "stock market", "stock market" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: NLTK processing. For preprocessing, each news title will be tokenized into individual words. Then applying SentimentIntensityAnalyzer from NLTK vadar to calculate the polarity score.", "Figure 2: Positive wordcloud (left) and negative wordcloud (right). We divide the news based on their compound score. For both positive news and negative news, we count all the words and rank them to create the wordcloud. The larger the word, more frequently it has appeared in the source.", "Figure 3: LSTM procedure", "Figure 4: Schematic diagram of rolling window.", "Figure 5: NLTK result.", "Figure 6: Mean prediction accuracies of the DP-LSTM and vanilla LSTM.", "Table 1: Predicted Mean MPA results.", "Table 2: S&P 500 predicted results.", "Figure 7: Prediction result of LSTM based on price." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "7-Figure5-1.png", "7-Figure6-1.png", "7-Table1-1.png", "8-Table2-1.png", "8-Figure7-1.png" ] }
[ "What is the prediction accuracy of the model?", "What is the dataset used in the paper?" ]
[ [ "1912.10806-7-Table1-1.png", "1912.10806-8-Table2-1.png" ], [ "1912.10806-Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing-0" ] ]
[ "mean prediction accuracy 0.99582651\nS&P 500 Accuracy 0.99582651", "historical S&P 500 component stocks\n 306242 news articles" ]
528
1904.09708
Compositional generalization in a deep seq2seq model by separating syntax and semantics
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in neuroscience suggesting separate brain systems for syntactic and semantic processing, we implement a modification to standard approaches in neural machine translation, imposing an analogous separation. The novel model, which we call Syntactic Attention, substantially outperforms standard methods in deep learning on the SCAN dataset, a compositional generalization task, without any hand-engineered features or additional supervision. Our work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure.
{ "paragraphs": [ [ "A crucial property underlying the expressive power of human language is its systematicity BIBREF0 , BIBREF1 : syntactic or grammatical rules allow arbitrary elements to be combined in novel ways, making the number of sentences possible in a language to be exponential in the number of its basic elements. Recent work has shown that standard deep learning methods in natural language processing fail to capture this important property: when tested on unseen combinations of known elements, state-of-the-art models fail to generalize BIBREF2 , BIBREF3 , BIBREF4 . It has been suggested that this failure represents a major deficiency of current deep learning models, especially when they are compared to human learners BIBREF5 , BIBREF0 .", "A recently published dataset called SCAN BIBREF2 (Simplified version of the CommAI Navigation tasks), tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb (\"jump\"), and testing on sequences containing that verb. Success on this difficult problem requires models to generalize knowledge gained about the other primitive verbs (\"walk\", \"run\" and \"look\") to the novel verb \"jump,\" without having seen \"jump\" in any but the most basic context (\"jump\" $\\rightarrow $ JUMP). It is trivial for human learners to generalize in this way (e.g. if I tell you that \"dax\" is a verb, you can generalize its usage to all kinds of constructions, like \"dax twice and then dax again\", without even knowing what the word means) BIBREF2 . However, standard recurrent seq2seq models fail miserably on this task, with the best-reported model (a gated recurrent unit augmented with an attention mechanism) achieving only 12.5% accuracy on the test set BIBREF2 , BIBREF4 . Recently, convolutional neural networks (CNN) were shown to perform better on this test, but still only achieved 69.2% accuracy on the test set.", "From a statistical-learning perspective, this failure is quite natural. The neural networks trained on the SCAN task fail to generalize because they have memorized biases that do indeed exist in the training set. Because \"jump\" has never been seen with any adverb, it would not be irrational to assume that \"jump twice\" is an invalid sentence in this language. The SCAN task requires networks to make an inferential leap about the entire structure of part of the distribution that they have not seen - that is, it requires them to make an out-of-domain (o.o.d.) extrapolation BIBREF5 , rather than merely interpolate according to the assumption that train and test data are independent and identically distributed (i.i.d.) (see Figure 1 ). Seen another way, the SCAN task and its analogues in human learning (e.g. \"dax\"), require models not to learn some of the correlations that are actually present in the training data BIBREF6 .", "Given that humans can perform well on certain kinds of o.o.d. extrapolation tasks, the human brain must be implementing principles that allow humans to generalize systematically, but which are lacking in current deep learning models. One prominent idea from neuroscience research on language processing that may offer such a principle is that the brain contains partially separate systems for processing syntax and semantics. In this paper, we motivate such a separation from a machine-learning perspective, and test a simple implementation on the SCAN dataset. 
Our novel model, which we call Syntactic Attention, encodes syntactic and semantic information in separate streams before producing output sequences. Our experiments show that our novel architecture achieves substantially improved compositional generalization performance over other recurrent networks on the SCAN dataset." ], [ "Syntax is the aspect of language underlying its systematicity BIBREF1 . When given a novel verb like \"dax,\" humans can generalize its usage to many different constructions that they have never seen before, by applying known syntactic or grammatical rules about verbs (e.g. rules about how to conjugate to a different tense or about how adverbs modify verbs). It has long been thought that humans possess specialized cognitive machinery for learning the syntactic or grammatical structure of language BIBREF7 . A part of the prefrontal cortex called Broca's area, originally thought only to be involved in language production, was later found to be important for comprehending syntactically complex sentences, leading some to conclude that it is important for syntactic processing in general BIBREF8 , BIBREF9 . For example, patients with lesions to this area showed poor comprehension on sentences such as \"The girl that the boy is chasing is tall\". Sentences such as this one require listeners to process syntactic information because semantics is not enough to understand their meanings - e.g. either the boy or the girl could be doing the chasing, and either could be tall.", "A more nuanced view situates the functioning of Broca's area within the context of prefrontal cortex in general, noting that it may simply be a part of prefrontal cortex specialized for language BIBREF9 . The prefrontal cortex is known to be important for cognitive control, or the active maintenance of top-down attentional signals that bias processing in other areas of the brain BIBREF10 (see diagram on the right of Figure 2 ). In this framework, Broca's area can be thought of as a part of prefrontal cortex specialized for language, and responsible for selectively attending to linguistic representations housed in other areas of the brain BIBREF9 .", "The prefrontal cortex has received much attention from computational neuroscientists BIBREF10 , BIBREF11 , and one model even showed a capacity for compositional generalization BIBREF6 . However, these ideas have not been taken up in deep learning research. Here, we emphasize the idea that the brain contains two separate systems for processing syntax and semantics, where the semantic system learns and stores representations of the meanings of words, and the syntactic system, housed in Broca's area of the prefrontal cortex, learns how to selectively attend to these semantic representations according to grammatical rules." ], [ "The Syntactic Attention model improves the compositional generalization capability of an existing attention mechanism BIBREF12 by implementing two separate streams of information processing for syntax and semantics (see Figure 2 ). Here, by \"semantics\" we mean the information in each word in the input that determines its meaning (in terms of target outputs), and by \"syntax\" we mean the information contained in the input sequence that should determine the alignment of input to target words. We describe the mechanisms of this separation and the other details of the model below, following the notation of BIBREF12 , where possible." 
], [ "In the seq2seq problem, models must learn a mapping from arbitrary-length sequences of inputs $ \\mathbf {x} = \\lbrace x_1, x_2, ..., x_{T_x}\\rbrace $ to arbitrary-length sequences of outputs $ \\mathbf {y} = \\lbrace y_1, y_2, ..., y_{T_y} \\rbrace $ : $ p(\\mathbf {y} | \\mathbf {x}) $ . The attention mehcanism of BIBREF12 models the conditional probability of each target word given the input sequence and previous targets: $p(y_i|y_1, y_2, ..., y_{i-1}, \\mathbf {x})$ . This is accomplished by processing the input sequence with a recurrent neural network (RNN) in the encoder. The outputs of this RNN are used both for encoding individual words in the input for later translation, and for determining their alignment to targets during decoding.", "The underlying assumption made by the Syntactic Attention architecture is that the dependence of target words on the input sequence can be separated into two independent factors. One factor, $p(y_i|x_j) $ , which we refer to as \"semantics,\" models the conditional distribution from individual words in the input to individual words in the target. Note that, unlike in the model of BIBREF12 , these $x_j$ do not contain any information about the other words in the input sequence because they are not processed with an RNN. They are \"semantic\" in the sense that they contain the information relevant to translating into the target language. The other factor, $p(j \\rightarrow i | \\mathbf {x}) $ , which we refer to as \"syntax,\" models the conditional probability that word $j$ in the input is relevant to word $i$ in the target sequence, given the entire input sequence. This alignment is accomplished from encodings of the inputs produced by an RNN. This factor is \"syntactic\" in the sense that it must capture all of the temporal information in the input that is relevant to determining the serial order of outputs. The crucial architectural assumption, then, is that any temporal dependency between individual words in the input that can be captured by an RNN should only be relevant to their alignment to words in the target sequence, and not to the translation of individual words. This assumption will be made clearer in the model description below." ], [ "The encoder produces two separate vector representations for each word in the input sequence. Unlike the previous attention model BIBREF12 ), we separately extract the semantic information from each word with a linear transformation: ", "$$m_j = W_m x_j,$$ (Eq. 8) ", "where $W_m$ is a learned weight matrix that multiplies the one-hot encodings $\\lbrace x_1, ..., x_{T_x}\\rbrace $ . Note that the semantic representation of each word does not contain any information about the other words in the sentence. As in the previous attention mechanism BIBREF12 , we use a bidirectional RNN (biRNN) to extract what we now interpret as the syntactic information from each word in the input sequence. The biRNN produces a vector for each word on the forward pass, $ (\\overrightarrow{h_1}, ..., \\overrightarrow{h_{T_x})}$ , and a vector for each word on the backward pass, $ (\\overleftarrow{h_1}, ..., \\overleftarrow{h_{T_x})}$ . The syntactic information (or \"annotations\" BIBREF12 ) of each word $x_j$ is determined by the two vectors $\\overrightarrow{h_{j-1}}$ , $\\overleftarrow{h_{j+1}}$ corresponding to the words surrounding it: ", "$$h_j = [\\overrightarrow{h_{j-1}};\\overleftarrow{h_{j+1}}]$$ (Eq. 9) ", "In all experiments, we used a bidirectional Long Short-Term Memory (LSTM) for this purpose. 
Note that because there is no sequence information in the semantic representations, all of the information required to parse (i.e. align) the input sequence correctly (e.g. phrase structure, modifying relationships, etc.) must be encoded by the biRNN." ], [ "The decoder models the conditional probability of each target word given the input and the previous targets: $p(y_i | y_1, y_2, ..., y_{i-1}, \\mathbf {x})$ , where $y_i$ is the target translation and $\\mathbf {x}$ is the whole input sequence. As in the previous model, we use an RNN to determine an attention distribution over the inputs at each time step (i.e. to align words in the input to the current target). However, our decoder diverges from this model in that the mapping from inputs to outputs is performed from a weighted average of the semantic representations of the input words: ", "$$d_i = \\sum _{j=1}^{T_x} \\alpha _{ij} m_j \\qquad p(y_i | y_1, y_2, ..., y_{i-1}, \\mathbf {x}) = f(d_i)$$ (Eq. 11) ", "where $f$ is parameterized by a linear function with a softmax nonlinearity, and the $\\alpha _{ij}$ are the weights determined by the attention model. We note again that the $m_j$ are produced directly from corresponding $x_j$ , and do not depend on the other inputs. The attention weights are computed by a function measuring how well the syntactic information of a given word in the input sequence aligns with the current hidden state of the decoder RNN, $s_i$ : ", "$$\\alpha _{ij} = \\frac{\\exp (e_{ij})}{\\sum _{k=1}^{T_x}\\exp (e_{ik})} \\qquad e_{ij} = a(s_{i}, h_j)$$ (Eq. 12) ", "where $e_{ij}$ can be thought of as measuring the importance of a given input word $x_j$ to the current target word $y_i$ , and $s_{i}$ is the current hidden state of the decoder RNN. BIBREF12 model the function $a$ with a feedforward network, but following BIBREF14 , we choose to use a simple dot product: ", "$$a(s_{i},h_j) = s_{i} \\cdot h_j,$$ (Eq. 13) ", "relying on the end-to-end backpropagation during training to allow the model to learn to make appropriate use of this function. Finally, the hidden state of the RNN is updated with the same weighted combination of the syntactic representations of the inputs: ", "$$s_i = g(s_{i-1}, c_{i}) \\qquad c_i = \\sum _{j=1}^{T_x} \\alpha _{ij} h_j$$ (Eq. 14) ", "where $g$ is the decoder RNN, $s_i$ is the current hidden state, and $c_i$ can be thought of as the information in the attended words that can be used to determine what to attend to on the next time step. Again, in all experiments an LSTM was used." ], [ "The SCAN dataset is composed of sequences of commands that must be mapped to sequences of actions BIBREF2 (see Figure 3 and supplementary materials for further details). The dataset is generated from a simple finite phrase-structure grammar that includes things like adverbs and conjunctions. There are 20,910 total examples in the dataset that can be split systematically into training and testing sets in different ways. These splits include the following:", "Simple split: training and testing data are split randomly", "Length split: training includes only shorter sequences", "Add primitive split: a primitive command (e.g. \"turn left\" or \"jump\") is held out of the training set, except in its most basic form (e.g. \"jump\" $\\rightarrow $ JUMP)", "Here we focus on the most difficult problem in the SCAN dataset, the add-jump split, where \"jump\" is held out of the training set. The best test accuracy reported in the original paper BIBREF2 , using standard seq2seq models, was 1.2%. 
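As a companion to the decoder defined in Eqs. (11)-(14) above, here is a minimal sketch of one decoding step. Again the class name and dimensions are assumptions rather than the authors' code; in particular, Eq. (12) uses the current hidden state while this sketch attends with the state carried over from the previous step, a common simplification.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SyntacticAttentionDecoderStep(nn.Module):
    """One-step sketch of Eqs. (11)-(14): attention scores come from the
    syntactic annotations h_j, but target words are produced only from the
    attention-weighted semantic vectors m_j."""

    def __init__(self, out_vocab, sem_dim=120, syn_dim=200, hid=400):
        super().__init__()
        assert hid == 2 * syn_dim          # so the dot product s_i . h_j is defined
        self.cell = nn.LSTMCell(hid, hid)  # g in Eq. (14); its input is the context c_i
        self.out = nn.Linear(sem_dim, out_vocab)  # f in Eq. (11), before the softmax

    def step(self, state, m, h):
        s, c_mem = state                                      # LSTM hidden / cell states
        scores = torch.bmm(h, s.unsqueeze(-1)).squeeze(-1)    # e_ij = s . h_j, Eq. (13)
        alpha = F.softmax(scores, dim=-1)                     # Eq. (12)
        d = torch.bmm(alpha.unsqueeze(1), m).squeeze(1)       # d_i = sum_j alpha_ij m_j
        logits = self.out(d)                                  # p(y_i | ...) = f(d_i), Eq. (11)
        ctx = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)     # c_i = sum_j alpha_ij h_j
        state = self.cell(ctx, (s, c_mem))                    # s_i = g(s_{i-1}, c_i), Eq. (14)
        return logits, alpha, state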
More recent work has tested other kinds of seq2seq models, including Gated Recurrent Units (GRU) augmented with attention BIBREF4 and convolutional neural networks (CNNs) BIBREF15 . Here, we compare the Syntactic Attention model to the best previously reported results." ], [ "Experimental procedure is described in detail in the supplementary materials. Train and test sets were kept as they were in the original dataset, but following BIBREF4 , we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Unless stated otherwise, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch. Details of the hyperparameter search are given in supplementary materials. Our best model used LSTMs, with 2 layers and 200 hidden units in the encoder, and 1 layer and 400 hidden units in the decoder, and 120-dimensional semantic vectors. The model included a dropout rate of 0.5, and was optimized using an Adam optimizer BIBREF16 with a learning rate of 0.001." ], [ "The Syntactic Attention model achieves state-of-the-art performance on the key compositional generalization task of the SCAN dataset (see table 1 ). The table shows results (mean test accuracy (%) $\\pm $ standard deviation) on the test splits of the dataset. Syntactic Attention is compared to the previous best models, which were a CNN BIBREF15 , and GRUs augmented with an attention mechanism (\"+ attn\"), which either included or did not include a dependency (\"- dep\") in the decoder on the previous action BIBREF4 . The best model from the hyperparameter search showed strong compositional generalization performance, attaining a mean accuracy of 91.1% (median = 98.5%) on the test set of the add-jump split. However, as in BIBREF15 , we found that our model showed variance across initialization seeds. We suggest that this may be due to the nature of the add-jump split: since \"jump\" has only been encountered in the simplest context, it may be that slight changes to the way that this verb is encoded can make big differences when models are tested on more complicated constructions. For this reason, we ran the best model 25 times on the add-jump split to get a more accurate assessment of performance. These results were highly skewed, with a mean accuracy of 78.4 % but a median of 91.0 % (see supplementary materials for detailed results). Overall, this represents an improvement over the best previously reported results on this task BIBREF4 , BIBREF15 , and does so without any hand-engineered features or additional supervision." ], [ "To test our hypothesis that compositional generalization requires a separation between syntax (i.e. sequential information used for alignment), and semantics (i.e. the mapping from individual source words to individual targets), we conducted two more experiments:", "Sequential semantics. An additional biLSTM was used to process the semantics of the sentence: $m_j = [\\overrightarrow{m_j};\\overleftarrow{m_j}]$ , where $\\overrightarrow{m_j}$ and $\\overleftarrow{m_j}$ are the vectors produced for the source word $x_j$ by a biLSTM on the forward and backward passes, respectively. These $m_j$ replace those generated by the simple linear layer in the Syntactic Attention model (in equation ( 8 )).", "Syntax-action. 
Syntactic information was allowed to directly influence the output at each time step in the decoder: $p(y_i|y_1, y_2, ..., y_{i-1}, \\mathbf {x}) = f([d_i; c_i])$ , where again $f$ is parameterized with a linear function and a softmax output nonlinearity.", "The results of the additional experiments (mean test accuracy (%) $\\pm $ standard deviations) are shown in table 2 . These results partially confirmed our hypothesis: performance on the jump-split test set was worse when the strict separation between syntax and semantics was violated by allowing sequential information to be processed in the semantic stream. However, \"syntax-action,\" which included sequential information produced by a biLSTM (in the syntactic stream) in the final production of actions, maintained good compositional generalization performance. We hypothesize that this was because in this setup, it was easier for the model to learn to use the semantic information to directly translate actions, so it largely ignored the syntactic information. This experiment suggests that the separation between syntax and semantics does not have to be perfectly strict, as long as non-sequential semantic representations are available for direct translation." ], [ "The Syntactic Attention model was designed to incorporate a key principle that has been hypothesized to describe the organization of the linguistic brain: mechanisms for learning rule-like or syntactic information are separated from mechanisms for learning semantic information. Our experiments confirm that this simple organizational principle encourages systematicity in recurrent neural networks in the seq2seq setting, as shown by the substantial improvement in the model's performance on the compositional generalization tasks in the SCAN dataset.", "The model makes the assumption that the translation of individual words in the input should be independent of their alignment to words in the target sequence. To this end, two separate encodings are produced for the words in the input: semantic representations in which each word is not influenced by other words in the sentence, and syntactic representations which are produced by an RNN that can capture temporal dependencies in the input sequence (e.g. modifying relationships, binding to grammatical roles). Just as Broca's area of the prefrontal cortex is thought to play a role in syntactic processing through a dynamic selective-attention mechanism that biases processing in other areas of the brain, the syntactic system in our model encodes serial information and is constrained to influence outputs through an attention mechanism alone.", "Patients with lesions to Broca's area are able to comprehend sentences like \"The girl is kicking a green ball\", where semantics can be used to infer the grammatical roles of the words (e.g. that the girl, not the ball, is doing the kicking) BIBREF8 . However, these patients struggle with sentences such as \"The girl that the boy is chasing is tall\", where the sequential order of the words, rather than semantics, must be used to infer grammatical roles (e.g. either the boy or the girl could be doing the chasing). 
In our model, the syntactic stream can be seen as analogous to Broca's area, because without it the model would not be able to learn about the temporal dependencies that determine the grammatical roles of words in the input.", "The separation of semantics and syntax, which is in the end a constraint, forces the model to learn, in a relatively independent fashion, 1) the individual meanings of words and 2) how the words are being used in a sentence (e.g. how they can modify one another, what grammatical role each is playing, etc.). This encourages systematic generalization because, even if a word has only been encountered in a single context (e.g. \"jump\" in the add-jump split), as long as its syntactic role is known (e.g. that it is a verb that can be modified by adverbs such as \"twice\"), it can be used in many other constructions that follow the rules for that syntactic role (see supplementary materials for visualizations). Additional experiments confirmed this intuition, showing that when sequential information is allowed to be processed by the semantic system (\"sequential semantics\"), systematic generalization performance is substantially reduced.", "The Syntactic Attention model bears some resemblance to a symbolic system - the paradigm example of systematicity - in the following sense: in symbolic systems, representational content (e.g. the value of a variable stored in memory) is maintained separately from the computations that are performed on that content. This separation ensures that the manipulation of the content stored in variables is fairly independent of the content itself, and will therefore generalize to arbitrary elements. Our model implements an analogous separation, but in a purely neural architecture that does not rely on hand-coded rules or additional supervision. In this way, it can be seen as transforming a difficult out-of-domain (o.o.d.) generalization problem into two separate i.i.d. generalization problems - one where the individual meanings of words are learned, and one where how words are used (e.g. how adverbs modify verbs) is learned (see Figure 4 ).", "It is unlikely that the human brain has such a strict separation between semantic and syntactic processing, and in the end, there must be more of an interaction between the two streams. We expect that the separation between syntax and semantics in the brain is only a relative one, but we have shown here that this kind of separation can be useful for encouraging systematicity and allowing for compositional generalization." ], [ "Our model integrates ideas from computational and cognitive neuroscience BIBREF9 , BIBREF11 , BIBREF6 , BIBREF10 , into the neural machine translation framework. Much of the work in neural machine translation uses an encoder-decoder framework, where one RNN is used to encode the source sentence, and then a decoder neural network decodes the representations given by the encoder to produce the words in the target sentence BIBREF17 . Earlier work attempted to encode the source sentence into a single fixed-length vector (the final hidden state of the encoder RNN), but it was subsequently shown that better performance could be achieved by encoding each word in the source, and using an attention mechanism to align these encodings with each target word during the decoding process BIBREF12 . 
The current work builds directly on this attention model, while incorporating a separation between syntactic and semantic information streams.", "The principle of compositionality has recently regained the attention of deep learning researchers BIBREF18 , BIBREF19 , BIBREF0 , BIBREF2 , BIBREF20 , BIBREF21 . In particular, the issue has been explored in the visual-question answering (VQA) setting BIBREF18 , BIBREF14 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Many of the successful models in this setting learn hand-coded operations BIBREF18 , BIBREF23 , use highly specialized components BIBREF14 , BIBREF24 , or use additional supervision BIBREF23 , BIBREF25 . In contrast, our model uses standard recurrent networks and simply imposes the additional constraint that syntactic and semantic information are processed in separate streams.", "Some of the recent research on compositionality in machine learning has had a special focus on the use of attention. For example, in the Compositional Attention Network, built for VQA, a strict separation is maintained between the representations used to encode images and the representations used to encode questions BIBREF14 . This separation is enforced by restricting them to interact only through attention distributions. Our model utilizes a similar restriction, reinforcing the idea that compositionality is enhanced when information from different modalities (in our case syntax and semantics) are only allowed to interact through discrete probability distributions.", "Previous research on compositionality in machine learning has also focused on the incorporation of symbol-like processing into deep learning models BIBREF18 , BIBREF23 , BIBREF25 . These methods generally rely on hand-coding or additional supervision for the symbolic representations or algorithmic processes to emerge. For example, in neural module networks BIBREF18 , a neural network is constructed out of composable neural modules that each learn a specific operation. These networks have shown an impressive capacity for systematic generalization on VQA tasks BIBREF19 . These models can be seen as accomplishing a similar transformation as depicted in Figure 4 , because the learning in each module is somewhat independent of the mechanism that composes them. However, BIBREF19 find that when these networks are trained end-to-end (i.e. without hand-coded parameterizations and layouts) their systematicity is significantly degraded.", "In contrast, our model learns in an end-to-end way to generalize systematically without any explicit symbolic processes built in. This offers an alternative way in which symbol-like processing can be achieved with neural networks - by enforcing a separation between mechanisms for learning representational content (semantics) and mechanisms for learning how to dynamically attend to or manipulate that content (syntax) in the context of a cognitive operation or reasoning problem." ], [ "The Syntactic Attention model incorporates ideas from cognitive and computational neuroscience into the neural machine translation framework, and produces the kind of systematic generalization thought to be a key component of human language-learning and intelligence. The key feature of the architecture is the separation of sequential information used for alignment (syntax) from information used for mapping individual inputs to outputs (semantics). This separation allows the model to generalize the usage of a word with known syntax to many of its valid grammatical constructions. 
This principle may be a useful heuristic in other natural language processing tasks, and in other systematic or compositional generalization tasks. The success of our approach suggests a conceptual link between dynamic selective-attention mechanisms in the prefrontal cortex and the systematicity of human cognition, and points to the untapped potential of incorporating ideas from cognitive science and neuroscience into modern approaches in deep learning and artificial intelligence BIBREF26 ." ], [ "The SCAN dataset BIBREF2 generates sequences of commands using the phrase-structure grammar described in Figure 5 . This simple grammar is not recursive, and so can generate a finite number of command sequences (20,910 total).", "These commands are interpreted according to the rules shown in Figure 6 . Although the grammar used to generate and interpret the commands is simple compared to any natural language, it captures the basic properties that are important for testing compositionality (e.g. modifying relationships, discrete grammatical roles, etc.). The add-primitive splits (described in main text) are meant to be analogous to the capacity of humans to generalize the usage of a novel verb (e.g. \"dax\") to many constructions BIBREF2 ." ], [ "The cluster used for all experiments consists of 3 nodes, with 68 cores in total (48 times Intel(R) Xeon(R) CPU E5-2650 v4 at 2.20GHz, 20 times Intel(R) Xeon(R) CPU E5-2650 v3 at 2.30GHz), with 128GB of RAM each, connected through a 56Gbit infiniband network. It has 8 Pascal Titan X GPUs and runs Ubuntu 16.04.", "All experiments were conducted with the SCAN dataset as it was originally published BIBREF2 . No data were excluded, and no preprocessing was done except to encode words in the input and action sequences into one-hot vectors, and to add special tokens for start-of-sequence and end-of-sequence tokens. Train and test sets were kept as they were in the original dataset, but following BIBREF4 , we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Except for the additional batch of 25 runs for the add-jump split, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch.", "Initial experimentation included different implementations of the assumption that syntactic information be separated from semantic information. After the architecture described in the main text showed promising results, a hyperparameter search was conducted to determine optimization (stochastic gradient descent vs. Adam), RNN-type (GRU vs. LSTM), regularizers (dropout, weight decay), and number of layers (1 vs. 2 layers for encoder and decoder RNNs). We found that the Adam optimizer BIBREF16 with a learning rate of 0.001, two layers in the encoder RNN and 1 layer in the decoder RNN, and dropout worked the best, so all further experiments used these specifications. Then, a grid-search was conducted to find the number of hidden units (in both semantic and syntactic streams) and dropout rate. We tried hidden dimensions ranging from 50 to 400, and dropout rates ranging from 0.0 to 0.5.", "The best model used an LSTM with 2 layers and 200 hidden units in the encoder, and an LSTM with 1 layer and 400 hidden units in the decoder, and used 120-dimensional semantic vectors, and a dropout rate of 0.5. The results for this model are reported in the main text.
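A hedged sketch of the training loop these settings imply (Adam with learning rate 0.001, batch size 1, 200,000 iterations, early stopping against a 20% held-out sample of the training set). The `model` callable, the data pairs, the greedy evaluation helper, and the exact stopping criterion are placeholders for illustration, not the authors' implementation.

import random
import torch
import torch.nn.functional as F


def exact_match_accuracy(model, pairs):
    """Assumed helper: fraction of command sequences decoded exactly right (greedy argmax)."""
    with torch.no_grad():
        hits = sum(int(torch.equal(model(cmd).argmax(-1), actions)) for cmd, actions in pairs)
    return hits / max(len(pairs), 1)


def train(model, train_pairs, val_pairs, iters=200_000, lr=1e-3, patience=10_000):
    """train_pairs / val_pairs: the assumed 80/20 split of the SCAN training set,
    where model(cmd) returns per-step logits over actions for one command sequence."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_acc, since_best = 0.0, 0
    for it in range(iters):
        cmd, actions = random.choice(train_pairs)        # batch size 1, as reported
        loss = F.cross_entropy(model(cmd), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if it % 1000 == 0:                               # periodic validation check
            acc = exact_match_accuracy(model, val_pairs)
            if acc > best_acc:
                best_acc, since_best = acc, 0
            else:
                since_best += 1000
                if since_best >= patience:               # illustrative early stopping
                    break
    return model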
All additional experiments were done with models derived from this one, with the same hyperparameter settings.", "All evaluation runs are reported in the main text: for each evaluation except for the add-jump split, models were trained 5 times with different random seeds, and performance was measured with means and standard deviations of accuracy. For the add-jump split, we included 25 runs to get a more accurate assessment of performance. This revealed a strong skew in the distribution of results, so we included the median as the main measure of performance. Occasionally, the model did not train at all due to an unknown error (possibly very poor random initialization, high learning rate or numerical error). For this reason, we excluded runs in which training accuracy did not get above 10%. No other runs were excluded." ], [ "As mentioned in the results section of the main text, we found that test accuracy on the add-jump split was variable and highly skewed. Figure 7 shows a histogram of these results (proportion correct). The model performs near-perfectly most of the time, but is also prone to catastrophic failures. This may be because, at least for our model, the add-jump split represents a highly nonlinear problem in the sense that slight differences in the way the primitive verb \"jump\" is encoded during training can have huge differences for how the model performs on more complicated constructions. We recommend that future experiments with this kind of compositional generalization problem take note of this phenomenon, and conduct especially comprehensive analyses of variability in results. Future research will also be needed to better understand the factors that determine this variability, and whether it can be overcome with other priors or regularization techniques." ], [ "Our main hypothesis is that the separation between sequential information used for alignment (syntax) and information about the meanings of individual words (semantics) encourages systematicity. The results reported in the main text are largely consistent with this hypothesis, as shown by the performance of the Syntactic Attention model on the compositional generalization tests of the SCAN dataset. However, it is also possible that the simplicity of the semantic stream in the model is important for improving compositional generalization. To test this, we replaced the linear layer in the semantic stream with a nonlinear neural network. From the model description in the main text: ", "$$p(y_i|y_1, y_2, ..., y_{i-1}, \\mathbf {x}) = f(d_i),$$ (Eq. 37) ", "In the original model, $f$ was parameterized with a simple linear layer, but here we use a two-layer feedforward network with a ReLU nonlinearity, before a softmax is applied to generate a distribution over the possible actions. We tested this model on the add-primitive splits of the SCAN dataset. The results (mean (%) with standard deviations) are shown in Table 3 , with comparison to the baseline Syntactic Attention model.", "The results show that this modification did not substantially degrade compositional generalization performance, suggesting that the success of the Syntactic Attention model does not depend on the parameterization of the semantic stream with a simple linear function.", "The original SCAN dataset was published with compositional generalization splits that have more than one example of the held-out primitive verb BIBREF2 .
The training sets in these splits of the dataset include 1, 2, 4, 8, 16, or 32 random samples of command sequences with the \"jump\" command, allowing for a more fine-grained measurement of the ability to generalize the usage of a primitive verb from few examples. For each number of \"jump\" commands included in the training set, five different random samples were taken to capture any variance in results due to the selection of particular commands to train on.", " BIBREF2 found that their best model (an LSTM without an attention mechanism) did not generalize well (below 39%), even when it was trained on 8 random examples that included the \"jump\" command, but that the addition of further examples to the training set improved performance. Subsequent work showed better performance at lower numbers of \"jump\" examples, with GRUs augmented with an attention mechanism (\"+ attn\"), and either with or without a dependence in the decoder on the previous target (\"- dep\") BIBREF4 . Here, we compare the Syntactic Attention model to these results.", "The Syntactic Attention model shows a substantial improvement over previously reported results at the lowest numbers of \"jump\" examples used for training (see Figure 8 and Table 4 ). Compositional generalization performance is already quite high at 1 example, and at 2 examples is almost perfect (99.997% correct).", "The compositional generalization splits of the SCAN dataset were originally designed to test for the ability to generalize known primitive verbs to valid unseen constructions BIBREF2 . Further work with SCAN augmented this set of tests to include compositional generalization based not on known verbs but on known templates BIBREF3 . These template splits included the following (see Figure 9 for examples):", "Jump around right: All command sequences with the phrase \"jump around right\" are held out of the training set and subsequently tested.", "Primitive right: All command sequences containing primitive verbs modified by \"right\" are held out of the training set and subsequently tested.", "Primitive opposite right: All command sequences containing primitive verbs modified by \"opposite right\" are held out of the training set and subsequently tested.", "Primitive around right: All command sequences containing primitive verbs modified by \"around right\" are held out of the training set and subsequently tested.", "Results of the Syntactic Attention model on these template splits are compared to those originally published BIBREF3 in Table 5 . The model, like the one reported in BIBREF3 , performs well on the jump around right split, consistent with the idea that this task does not present a problem for neural networks. The rest of the results are mixed: Syntactic Attention shows good compositional generalization performance on the Primitive right split, but fails on the Primitive opposite right and Primitive around right splits. All of the template tasks require models to generalize based on the symmetry between \"left\" and \"right\" in the dataset. However, in the opposite right and around right splits, this symmetry is substantially violated, as one of the two prepositional phrases in which they can occur is never seen with \"right.\" Further research is required to determine whether a model implementing similar principles to Syntactic Attention can perform well on this task." ], [ "The way that the attention mechanism of BIBREF12 is set up allows for easy visualization of the model's attention.
Here, we visualize the attention distributions over the words in the command sequence at each step during the decoding process. In the following figures (Figures 10 to 15 ), the attention weights on each command (in the columns of the image) are shown for each of the model's outputs (in the rows of the image) for some illustrative examples. Darker blue indicates a higher weight. The examples are shown in pairs for a model trained and tested on the add-jump split, with one example drawn from the training set and a corresponding example drawn from the test set. Examples are shown in increasing complexity, with a failure mode depicted in Figure 15 .", "In general, it can be seen that although the attention distributions on the test examples are not exactly the same as those from the corresponding training examples, they are usually good enough for the model to produce the correct action sequence. This shows the model's ability to apply the same syntactic rules it learned on the other verbs to the novel verb \"jump.\" In the example shown in Figure 15 , the model fails to attend to the correct sequence of commands, resulting in an error." ] ], "section_name": [ "Introduction", "Syntax and prefrontal cortex", "Syntactic Attention", "Separation assumption", "Encoder", "Decoder", "SCAN dataset", "Implementation details", "Results", "Additional experiments", "Discussion", "Other related work", "Conclusion", "SCAN dataset details", "Experimental procedure details", "Skew of add-jump results", "Supplementary experiments", "Visualizing attention" ] }
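Heatmaps like the ones discussed above (Figures 10 to 15) are straightforward to reproduce from the attention weights; a small matplotlib sketch follows, assuming the per-step weights for one example have been collected into a (number of outputs, number of commands) array. The function name and arguments are illustrative, not part of the paper.

import matplotlib.pyplot as plt


def plot_attention(alphas, commands, actions):
    """alphas: (len(actions), len(commands)) attention weights collected while
    decoding one example; darker cells indicate higher weight, as in the figures."""
    fig, ax = plt.subplots()
    ax.imshow(alphas, cmap="Blues", aspect="auto")
    ax.set_xticks(range(len(commands)))
    ax.set_xticklabels(commands, rotation=90)
    ax.set_yticks(range(len(actions)))
    ax.set_yticklabels(actions)
    ax.set_xlabel("input command words")
    ax.set_ylabel("output actions")
    fig.tight_layout()
    return fig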
{ "answers": [ { "annotation_id": [ "46604f501a5888f4a17f691db884d1b3bb1f8674" ], "answer": [ { "evidence": [ "A recently published dataset called SCAN BIBREF2 (Simplified version of the CommAI Navigation tasks), tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb (\"jump\"), and testing on sequences containing that verb. Success on this difficult problem requires models to generalize knowledge gained about the other primitive verbs (\"walk\", \"run\" and \"look\") to the novel verb \"jump,\" without having seen \"jump\" in any but the most basic context (\"jump\" $\\rightarrow $ JUMP). It is trivial for human learners to generalize in this way (e.g. if I tell you that \"dax\" is a verb, you can generalize its usage to all kinds of constructions, like \"dax twice and then dax again\", without even knowing what the word means) BIBREF2 . However, standard recurrent seq2seq models fail miserably on this task, with the best-reported model (a gated recurrent unit augmented with an attention mechanism) achieving only 12.5% accuracy on the test set BIBREF2 , BIBREF4 . Recently, convolutional neural networks (CNN) were shown to perform better on this test, but still only achieved 69.2% accuracy on the test set." ], "extractive_spans": [], "free_form_answer": "it systematically holds out inputs in the training set containing basic primitive verb, \"jump\", and tests on sequences containing that verb.", "highlighted_evidence": [ "A recently published dataset called SCAN BIBREF2 (Simplified version of the CommAI Navigation tasks), tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb (\"jump\"), and testing on sequences containing that verb." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "five" ], "paper_read": [ "no" ], "question": [ "How does the SCAN dataset evaluate compositional generalization?" ], "question_id": [ "7182f6ed12fa990835317c57ad1ff486282594ee" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "" ], "topic_background": [ "unfamiliar" ] }
{ "caption": [ "Figure 1: Simplified illustration of out-of-domain (o.o.d.) extrapolation required by SCAN compositional generalization task. Shapes represent the distribution of all possible command sequences. In a simple split, train and test data are independent and identically distributed (i.i.d.), but in the add-primitive splits, models are required to extrapolate out-of-domain from a single example.", "Figure 2: (left) Syntactic Attention architecture. Syntactic and semantic information are maintained in separate streams. The semantic stream processes words with a simple linear transformation, so that sequential information is not maintained. This information is used to directly produce actions. The syntactic stream processes inputs with a recurrent neural network, allowing it to capture temporal dependencies between words. This stream determines the attention over semantic representations at each time step during decoding. (right) Diagram of an influential computational model of prefrontal cortex (PFC) [21]. Prefrontal cortex dynamically modulates processes in other parts of the brain through top-down selective attention signals. A part of the prefrontal cortex, Broca’s area, is thought to be important for syntactic processing [26]. Figure reproduced from [20].", "Table 2: Results of additional experiments. Star* indicates median of 25 runs.", "Figure 4: Illustration of the transformation of an out-of-domain (o.o.d.) generalization problem into two independent, identically distributed (i.i.d.) generalization problems. This transformation is accomplished by the Syntactic Attention model without hand-coding grammatical rules or supervising with additional information such as parts-of-speech tags.", "Figure 5: Phrase-structure grammar used to generate SCAN dataset. Figure reproduced from [15].", "Figure 6: Rules for interpreting command sequences to generate actions in SCAN dataset. Figure reproduced from [15].", "Figure 7: Histogram of test accuracies across all 25 runs of add-jump split.", "Figure 8: Compositional generalization performance on add-jump split with additional examples. Syntactic Attention model is compared to previously reported models [4] on test accuracy as command sequences with \"jump\" are added to the training set. Mean accuracy (proportion correct) was computed with 5 different random samples of \"jump\" commands. Error bars represent standard deviations.", "Table 4: Results of Syntactic Attention compared to models of Bastings et al. [4] on jump-split with additional examples. Mean accuracy (% - rounded to tenths) is shown with standard deviations. Same data as depicted in Figure 8.", "Figure 9: Table of example command sequences for each template split. Reproduced from [17] .", "Table 5: Results of Syntactic Attention compared to models of Loula et al. [17] on template splits of SCAN dataset. Mean accuracy (%) is shown with standard deviations. 
P = Primitive", "Figure 10: Attention distributions: correct example", "Figure 11: Attention distributions: correct example", "Figure 12: Attention distributions: correct example", "Figure 13: Attention distributions: correct example", "Figure 14: Attention distributions: correct example", "Figure 15: Attention distributions: incorrect example" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "7-Table2-1.png", "8-Figure4-1.png", "11-Figure5-1.png", "11-Figure6-1.png", "12-Figure7-1.png", "14-Figure8-1.png", "14-Table4-1.png", "14-Figure9-1.png", "15-Table5-1.png", "15-Figure10-1.png", "16-Figure11-1.png", "16-Figure12-1.png", "16-Figure13-1.png", "17-Figure14-1.png", "18-Figure15-1.png" ] }
[ "How does the SCAN dataset evaluate compositional generalization?" ]
[ [ "1904.09708-Introduction-1" ] ]
[ "it systematically holds out inputs in the training set containing basic primitive verb, \"jump\", and tests on sequences containing that verb." ]
529
1902.09393
Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.
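The abstract above does not spell out the training scheme beyond "standard continuous and discrete optimisation schemes." Purely as a generic illustration of how a discrete syntactic module and a continuous semantic module are often trained cooperatively, and not necessarily the authors' exact procedure, here is a score-function (REINFORCE) plus backprop sketch in which every module interface is an assumption.

import torch


def cooperative_step(parser, composer, classifier, sentence, label, opt):
    """Generic sketch: the parser samples discrete structure and is updated with a
    REINFORCE-style gradient on the downstream reward, while the composer and
    classifier receive ordinary backpropagated gradients. All interfaces here
    (parser.sample, composer, classifier) are hypothetical placeholders."""
    tree, log_prob = parser.sample(sentence)          # discrete decisions, with their log-probability
    logits = classifier(composer(sentence, tree))     # continuous semantic pathway
    task_loss = torch.nn.functional.cross_entropy(logits, label)
    reward = -task_loss.detach()                      # better predictions -> higher reward
    parser_loss = -(reward * log_prob)                # score-function estimator for the discrete module
    opt.zero_grad()
    (task_loss + parser_loss).backward()
    opt.step()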
{ "paragraphs": [ [ "This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition." ], [ "The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper." ], [ "Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection \"The First Page\" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section \"Length of Submission\" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.", "By uncommenting \\aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \\def\\aclpaperid{***} definition at the top.", "The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \\aclfinalcopy is commented out.", "The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline." ], [ "The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. 
The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (LaTeX users may uncomment the \\aclfinalcopy command in the document preamble.)", "Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ )." ], [ "NAACL-HLT provides this description in LaTeX 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the LaTeX 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings." ], [ "For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from LaTeX using the pdflatex command. If your version of LaTeX produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.", "Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.", "It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \\special{papersize=210mm,297mm} in the latex preamble (directly below the \\usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some.", "Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible." ], [ "Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:", "Left and right margins: 2.5 cm", "Top margin: 2.5 cm", "Bottom margin: 2.5 cm", "Column width: 7.7 cm", "Column height: 24.7 cm", "Gap between columns: 0.6 cm", "Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible." ], [ "For reasons of uniformity, Adobe's Times Roman font should be used.
In LaTeX 2e this is accomplished by putting", "\\usepackage{times}", "\\usepackage{latexsym}", "in the preamble. If Times Roman is unavailable, use Computer Modern Roman (LaTeX 2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font." ], [ "Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.", "Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's name(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.", "The title, author names and addresses should be completely identical to those entered into the electronic paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent.", "Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.", "Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.", "Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title." ], [ "Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.", "Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \\cite and the latter with \\shortcite or \\newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \\cite command, e.g., \\cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved.
Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents.", "We suggest that instead of", "“ BIBREF0 showed that ...”", "you use", "“Gusfield Gusfield:97 showed that ...”", "If you are using the provided LaTeX and BibTeX style files, you can use the command \\citet (cite in text) to get “author (year)” citations.", "If the BibTeX file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option:", "\\usepackage[nohyperref]{naaclhlt2019}", "Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use BibTeX records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/.", "As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography.", "As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g.,", "“We previously showed BIBREF0 ...”", "should be avoided. Instead, use citations such as", "“ BIBREF0 Gusfield:97 previously showed ... ”", "Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form.", "Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review.", "References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \\bibliography commands near the end for more.", "Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred.
A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 .", "The and Bib style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above.", "Example citing an arxiv paper: BIBREF7 .", "Example article in journal citation: BIBREF8 .", "Example article in proceedings, with location: BIBREF9 .", "Example article in proceedings, without location: BIBREF10 .", "See corresponding .bib file for further details.", "Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.", "Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix." ], [ "Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line." ], [ "Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink.", "Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments." ], [ "In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color." ], [ "It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”." ], [ "The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. 
Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review.", "NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix \"Appendices\" and Appendix \"Supplemental Material\" for further information.", "Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source." ], [ "The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review.", "Preparing References:", "Include your own bib file like this: \\bibliographystyle{acl_natbib} \\begin{thebibliography}{50} ", "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings EMNLP 2015, pages 632–642.", "Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the ACL 2016, Volume 1: Long Papers.", "Michael B. Chang, Abhishek Gupta, Sergey Levine, and Thomas L. Griffiths. 2018. Automatically composing representation transformations as a means for generalization. CoRR, abs/1807.04640.", "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of ACL 2017, Volume 1: Long Papers, pages 1657–1668.", "Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of AAAI 2018.", "Caio Corro and Ivan Titov. 2018. Differentiable perturb-and-parse: Semi-supervised parsing with a structured variational autoencoder. CoRR, abs/1807.09875.", "Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. 1992. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In Proceedings of CogSci 1992, page 14.", "Andrew Drozdov and Samuel Bowman. 2017. The coadaptation problem when learning how and what to compose. Proceedings of the 2nd Workshop on Representation Learning for NLP.", "C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, and Dong Chen. 1989. Higher order recurrent networks and grammatical inference. In Proceedings of NIPS 1989, pages 380–387.", "Christoph Goller and Andreas Kuchler. 1996. 
Learning task-dependent distributed representations by backpropagation through structure. Neural Networks, 1:347–352.", "Will Grathwohl, Dami Choi, Yuhuai Wu, Geoffrey Roeder, and David K. Duvenaud. 2017. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. CoRR, abs/1711.00123.", "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.", "Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. CoRR, abs/1611.01144.", "Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proceedings of NIPS 2015, pages 190–198.", "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", "Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. CoRR, abs/1312.6114.", "Phong Le and Willem H. Zuidema. 2015. The forest convolutional network: Compositional distributional semantics with a neural chart and without binarization. In Proceedings of EMNLP 2015, pages 1155–1164.", "Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. arXiv preprint arXiv:1702.02181.", "Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR, abs/1611.00712.", "Jean Maillard and Stephen Clark. 2018. Latent tree learning with differentiable parsers: Shift-reduce parsing and chart parsing. CoRR, abs/1806.00840.", "Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-lstms. CoRR, abs/1705.09189.", "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Proceedings of NIPS 2017, pages 6294–6305.", "Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030.", "Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of ICML 2016, pages 1928–1937.", "Michael Mozer and Sreerupa Das. 1992. A connectionist symbol manipulator that discovers the structure of context-free languages. In Proceedings of NIPS 1992, pages 863–870.", "Tsendsuren Munkhdalai and Hong Yu. 2017. Neural tree indexers for text understanding. In Proceedings of EACL 2017, volume 1, page 11. NIH Public Access.", "Nikita Nangia and Samuel R. Bowman. 2018. Listops: A diagnostic dataset for latent tree learning. In Proceedings of NAACL-HLT 2018, Student Research Workshop, pages 92–99.", "Barbara BH Partee, Alice G ter Meulen, and Robert Wall. 1990. Mathematical methods in linguistics, volume 30. Springer Science & Business Media.", "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543.", "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.", "Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.", "Steven J. 
Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of CVPR 2017, pages 1179–1195.", "Sheldon M. Ross. 1997. Simulation (2. ed.). Statistical modeling and decision science. Academic Press.", "Himanshu Sahni, Saurabh Kumar, Farhan Tejani, and Charles L. Isbell. 2017. Learning to compose skills. CoRR, abs/1711.11289.", "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347.", "Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. 2018. On tree-based neural sentence modeling. CoRR, abs/1808.09644.", "Satinder P. Singh. 1992. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8:323–339.", "Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of ICML 2011, pages 129–136.", "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013a. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642.", "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642.", "G Sun. 1990. Connectionist pushdownautomata that learn context-free gram-mars. In Proc. IJCNN'90, volume 1, pages 577–580.", "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL 2015, Volume 1: Long Papers, pages 1556–1566.", "George Tucker, Andriy Mnih, Chris J. Maddison, John Lawson, and Jascha Sohl-Dickstein. 2017. REBAR: low-variance, unbiased gradient estimates for discrete latent variable models. In Proceedings of NIPS 2017, pages 2624–2633.", "Claire Cardie Vlad Niculae, André F. T. Martins. 2018. Towards dynamic computation graphs via sparse latent structure. CoRR, abs/1809.00653.", "Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? TACL, 6:253–267.", "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT 2018, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics.", "Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256.", "Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2016. Learning to compose words into sentences with reinforcement learning. CoRR, abs/1611.09100.", "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.", "Xiao-Dan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of ICML 2015, pages 1604–1612.", "|", "where naaclhlt2019 corresponds to a naaclhlt2019.bib file. Appendices Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. 
Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \\appendix before any appendix section to switch the section numbering over to letters. Supplemental Material Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material. " ] ], "section_name": [ "Credits", "Introduction", "General Instructions", "The Ruler", "Electronically-available resources", "Format of Electronic Manuscript", "Layout", "Fonts", "The First Page", "Sections", "Footnotes", "Graphics", "Accessibility", "Translation of non-English Terms", "Length of Submission", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "467cf1dcbdef566076d9bc7a2a7d97c3f35e8706" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "extractive_spans": [], "free_form_answer": "The system outperforms by 27.7% the LSTM model, 38.5% the RL-SPINN model and 41.6% the Gumbel Tree-LSTM", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "annotation_id": [ "dade8cea8da307142ba5475cb8e34bb176b79a0a" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "extractive_spans": [], "free_form_answer": "The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "How much does this system outperform prior work?", "What are the baseline systems that are compared against?" ], "question_id": [ "af75ad21dda25ec72311c2be4589efed9df2f482", "de12e059088e4800d7d89e4214a3997994dbc0d9" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018).", "Figure 1: Blue crosses depict an average accuracy of five models on the test examples that have lengths within certain range. Black circles illustrate individual models.", "Table 2: Accuracy on ListOps test set for our model with three different baselines, with and without PPO. We use K = 15 for PPO.", "Figure 2: The distributions of cosine similarity for elements from the different sets of mathematical expressions. A logarithmic scale is used for y-axis.", "Table 3: Results on SNLI. *: publicly available code and hyperparameter optimization was used to obtain results. †: results are taken from Williams et al. (2018a)", "Table 4: Results on MultiNLI. †: results are taken from Williams et al. (2018a).", "Table 5: Accuracy results of models on the SST. All the numbers are from Choi et al. (2018) but ∗ where we used their publicly available code and performed hyperparameter optimization." ], "file": [ "5-Table1-1.png", "6-Figure1-1.png", "7-Table2-1.png", "7-Figure2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
[ "How much does this system outperform prior work?", "What are the baseline systems that are compared against?" ]
[ [ "1902.09393-5-Table1-1.png" ], [ "1902.09393-5-Table1-1.png" ] ]
[ "The system outperforms by 27.7% the LSTM model, 38.5% the RL-SPINN model and 41.6% the Gumbel Tree-LSTM", "The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM" ]
531
1909.13695
Non-native Speaker Verification for Spoken Language Assessment
Automatic spoken language assessment systems are becoming more popular in order to handle increasing interest in second language learning. One challenge for these systems is to detect malpractice. Malpractice can take a range of forms; this paper focuses on detecting when a candidate attempts to impersonate another in a speaking test. This form of malpractice is closely related to speaker verification, but applied in the specific domain of spoken language assessment. Advanced speaker verification systems, which leverage deep-learning approaches to extract speaker representations, have been successfully applied to a range of native speaker verification tasks. These systems are explored for non-native spoken English data in this paper. The data used for speaker enrolment and verification is mainly taken from the BULATS test, which assesses English language skills for business. Performance of systems trained on relatively limited amounts of BULATS data, and standard large speaker verification corpora, is compared. Experimental results on large-scale test sets with millions of trials show that the best performance is achieved by adapting the imported model to non-native data. Breakdown of impostor trials across different first languages (L1s) and grades is analysed, which shows that intra-L1 impostors (those sharing the reference speaker's L1) are more challenging for speaker verification systems.
{ "paragraphs": [ [ "Automatic spoken assessment systems are becoming increasingly popular, especially for English with the high demand around the world for learning of English as a second language BIBREF0, BIBREF1, BIBREF2, BIBREF3. In addition to assessing a candidate's English ability such as fluency and pronunciation and giving feedback to the candidate, these automatic systems also need to ensure the integrity of the candidate's score by detecting malpractice, as shown in Figure FIGREF1. Malpractice is the action by a candidate that breaks the assessment regulation and potentially threatens the reliability of the exam and associated certification. Malpractice can take a range of forms in spoken language assessment scenarios, such as using or trying to use unauthorised materials, impersonation, speaking irrelevant to prompts/questions, speaking in his/her first language (L1) instead of the target language for spoken tests, etc. This work aims to investigate the problem of automatically detecting impersonation, in which a candidate attempts to impersonate another in a speaking test. This is closely related to speaker verification.", "", "Speaker verification is the process to accept or reject an identity claim by comparing the speaker-specific information extracted from the verification speech with that from the enrolment speech of the claimed identity. These approaches can be directly applied to detect impersonation in spoken language tests. The performance of speaker verification systems has advanced considerably in the last decade with the development of i-vector modelling BIBREF4, in which a speech segment or a speaker is represented as a low-dimensional feature vector. Extraction of i-vectors is normally based on a Gaussian mixture model (GMM) based universal background model (UBM). This fixed length representation can then be used with a probabilistic linear discriminant analysis (PLDA) model to produce verification scores by comparing speaker representations, which are then used to make valid or impostor speaker decisions BIBREF5, BIBREF6, BIBREF7, BIBREF8. Recently, with developments in deep learning, performance of speaker verification systems has been improved by replacing the GMM with a deep neural network (DNN) to derive statistics for extracting speaker representations. This DNN is usually trained to take a fixed length window of the acoustics and discriminate between speakers using supplied speaker labels as targets. To handle the variable-length nature of the acoustic signal, a pooling layer is used to yield the final fixed-dimensional speaker representation. In BIBREF9, a DNN was trained at the frame level, and pooling was performed by averaging activation vectors of the last hidden layer over all frames of an input utterance. In BIBREF10, BIBREF11, BIBREF12, segment-level embeddings were extracted, which are referred to as x-vectors BIBREF12 with data augmentation. By leveraging data augmentation based on background noise and acoustic reverberation, these x-vectors based systems can achieve better performance than i-vector and d-vector based systems on standard speaker verification tasks.", "There has been some previous work on tasks related to non-native speech data using speaker verification approaches, such as detection of non-native speech BIBREF13, classification of native/non-native English BIBREF14 and L1 detection BIBREF15. 
In BIBREF16, meta-data (L1) sensitive bottleneck features were employed within the i-vector framework to improve the performance of speaker verification with non-native speech. In contrast, this paper focuses on making use of the state-of-the-art deep-learning based speaker verification approaches to detect candidate impersonation in an English speaking test. As there is limited amounts of data available for the non-native learner task, it is of interest to investigate adapting a standard speaker verification task to this non-native task. Here a system based on the VoxCeleb dataset BIBREF17, BIBREF18 is adapted to the BULATS task. Two forms of adaptation are examined: modifying the PLDA distance measure; and adapting the process for extracting the speaker representation by “fine-tuning\" the network to the target domain. Furthermore, detailed analysis of performance is also done with respect to speaker attributes. Gender is an important attribute in impostor selection for standard speaker verification tasks, and for non-native speech, there are two additional speaker attributes: the L1 and the language proficiency level, which should also be taken into consideration for speaker verification.", "This paper is organised as follows. Section 2 gives an overview of speaker verification systems, and Section 3 introduces the non-native spoken English corpora used in this work. Experimental setup is described in Section 4, results and analysis are detailed in Section 5, and finally, conclusions are drawn in Section 6." ], [ "In this work both i-vector and x-vector representations are used. For the i-vector speaker representation the form described in BIBREF4, BIBREF19 is used. This section will just discuss the x-vector speaker representation as this is the form that is adapted to the non-native verification task." ], [ "There are three blocks to form the DNN for extracting the utterance-level speaker representation, or embedding. The first block of the deep embedding extractor is a frame-level feature extractor. The input to this block is a sequence of acoustic feature vectors $\\lbrace \\mathbf {x}_{1},\\mathbf {x}_{2},\\cdots \\mathbf {x}_{T}\\rbrace $ of $T$ frames. This part normally consists of a number of hidden layers such as long short-term memory (LSTM) BIBREF20 or time delay neural network (TDNN) layers BIBREF11, BIBREF12. The activations of the last hidden layer of this block for the input frames, $\\lbrace \\mathbf {h}_{1},\\mathbf {h}_{2},\\cdots \\mathbf {h}_{T}\\rbrace $, form the input to the second block which is a statistics pooling layer. This layer converts variable-length frame-level features into a fixed-dimensional vector by calculating the mean vector, $\\mu $ and standard deviation vector $\\sigma $ of the frame-level feature vectors over the $T$ frames. The third block takes the statistics as the input and produces utterance-level representations using a number of stacked fully-connected hidden layers. The output of the DNN extractor is a softmax layer, and each of the nodes corresponds to one speaker identity. This DNN extractor is trained based on a cross-entropy loss function using the supplied speaker labels to get the targets. Consider there are $N$ training segments and $S$ speakers, the cross-entropy can be written as", "where $\\theta $ represents the parameters of the DNN and $\\delta \\left(\\cdot \\right)$ represents the Kronecker delta function. $s_{k}^{\\left(n\\right)}$ represents that the speaker label for segment $n$ is $s_{k}$. 
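To make the three-block extractor above concrete, the following is a simplified PyTorch sketch rather than the exact configuration used in this work (which follows the standard x-vector recipe): TDNN-style 1-D convolutions give frame-level features, a statistics-pooling layer stacks their mean and standard deviation over time, and segment-level layers feed a softmax over the S training speakers. The displayed loss equation is not reproduced in the extracted text above; the training step below uses the standard multi-class cross-entropy over speaker labels, which is consistent with the surrounding notation. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XVectorExtractor(nn.Module):
    """Simplified x-vector-style extractor: frame-level TDNN layers,
    statistics pooling, segment-level layers, speaker softmax."""

    def __init__(self, feat_dim=40, embed_dim=512, num_speakers=8480):
        super().__init__()
        # Block 1: frame-level feature extractor (TDNN layers as dilated 1-D convs).
        self.frame_layers = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(),
        )
        # Block 3: segment-level layers applied to the pooled statistics.
        self.segment1 = nn.Linear(2 * 1500, embed_dim)   # embeddings taken from this affine output
        self.segment2 = nn.Linear(embed_dim, embed_dim)
        self.output = nn.Linear(embed_dim, num_speakers)

    def forward(self, x):
        # x: (batch, feat_dim, num_frames)
        h = self.frame_layers(x)                              # (batch, 1500, T')
        # Block 2: statistics pooling over the frame axis (mean and std).
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        embedding = self.segment1(stats)                      # utterance-level embedding
        h2 = F.relu(self.segment2(F.relu(embedding)))
        return self.output(h2), embedding

def training_step(model, feats, speaker_ids, optimiser):
    """One update with the standard multi-class cross-entropy over speaker labels."""
    logits, _ = model(feats)
    loss = F.cross_entropy(logits, speaker_ids)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```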
After the DNN is trained, the utterance-level embeddings, $\\mathbf {e}_{d}$, are normally extracted from the output of the affine component that is with or without the nonlinear activation function applied of one hidden layer in the third block of the DNN BIBREF11, BIBREF12." ], [ "After the speaker embeddings are extracted, they are used to train a PLDA model that yields the score (distance) between speaker embeddings. The training of the PLDA models aims to maximise the between-speaker difference and minimise the within-speaker variation, typically using expectation maximisation (EM). A number of variants of PLDA models have been introduced into the speaker verification task based on this “standard\" PLDA BIBREF5: two-covariance PLDA BIBREF21 and heavy-tailed PLDA BIBREF6. The variant implemented in the Kaldi toolkit BIBREF19, and used in this work, follows BIBREF22 and is similar to the two-covariance model. This model can be written as", "", "where $\\mathbf {e}$ is the speaker embedding. The vector $\\mathbf {y}$ represents the underlying speaker vector and $\\mu $ represents its mean. $\\mathbf {z}$ is the Gaussian noise vector. For speaker verification tasks, estimation of this PLDA model can be performed by estimating the between-speaker covariance matrix, $\\Gamma $, and within-speaker covariance matrix, $\\Lambda $, using the EM algorithm.", "PLDA is a powerful approach to classifying speakers given a large amounts of training data with speaker labels BIBREF23, BIBREF24, BIBREF25. However, large amounts of labelled training data may not be available in the domain of interest such as the one considered in this paper, the non-native speaker verification. One approach to alleviate this problem is to do adaptation from a pre-trained out-of-domain model to the target domain. There are a number of methods for adapting the PLDA model in both supervised and unsupervised manners BIBREF26, BIBREF25. The Kaldi toolkit implements an unsupervised adaptation method which does not require knowledge of speaker labels BIBREF19. This method aims at adapting $\\Gamma $ and $\\Lambda $ of the out-of-domain PLDA model to better match the total covariance of the in-domain adaptation data." ], [ "The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. The first section (A) contains eight questions about the candidate and their work. The second section (B) is a read-aloud section in which the candidates are asked to read eight sentences. The last three sections (C, D and E) have longer utterances of spontaneous speech elicited by prompts. In section C the candidates are asked to talk for one minute about a prompted business related topic. In section D, the candidate has one minute to describe a business situation illustrated in graphs or charts, such as pie or bar charts. The prompt for section E asks the candidate to imagine they are in a specific conversation and to respond to questions they may be asked in that situation (e.g. advice about planning a conference). This section is made up of 5x 20 seconds responses.", "Each section is scored between 0 and 6; the overall score is therefore between 0 and 30. 
This score is then mapped into Common European Framework of Reference (CEFR) BIBREF28 language proficiency levels, which is an international standard for describing language ability on a six-level scale. Each candidate is finally assigned a “grade\", ranging from minimal (A1) and basic (A2) command, through limited but effective (B1) and generally effective (B2) command, to good operational (C1) and fully operational (C2) command of the spoken language.", "In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability." ], [ "A set of 8,480 candidates from BULATS was used for training. The approximately 280 hours of speech covers a wide range of more than 70 different L1s. There are 15 major L1s with more than 100 candidates for each, including Tamil, Gujarati, Hindi, Telugu, Malayalam, Bengali, Spanish, Russian, Kannada, Portuguese, French, etc. Data augmentation was applied to the training set, and each recording was processed with a randomly selected source from “babble\", “music\", “noise\" and “reverb\" BIBREF12, which roughly doubled the size of the original training set. Another set of 8,318 BULATS candidates was used as one test set to evaluate the system performance. There are 7 major L1s in this set, each of which has more than 100 candidates: Spanish, Thai, Tamil, Arabic, Vietnamese, Polish and Dutch. There are no overlapping candidates between the BULATS training and test sets. The other test set of 2,540 candidates came from the Linguaskill test, of which there are 6 major L1s each with more than 100 candidates: Hindi, Portuguese, Japanese, Spanish, Thai and Vietnamese. Each of the training set and two test sets was fairly gender balanced, with approximately one third of candidates graded as B1, one third graded as B2, and the rest graded as A1, A2, C1, or C2, according to CEFR ability levels. For each test set candidate, responses from sections A and B were used for speaker enrolment (approximately 180s), while the more challenging free-speaking sections C, D, and E were used for whole section-level verification (approximately 60s for each section)." ], [ "Gender is generally considered an important speaker attribute, and impostor trials were first selected from the same gender group as the reference speaker, as commonly done in standard speaker verification tasks. This resulted in a total of 104.8 million verification trials for the BULATS test set and 9.7 million trials for the Linguaskill test set.", "An i-vector/PLDA system and an x-vector/PLDA system were first trained on the “in-domain\" BULATS training set. For the i-vector system, 13-dimensional perceptual linear predictive (PLP) features were extracted using the HTK toolkit BIBREF29 with a frame-length of 25ms. A UBM of 2,048 mixture components was first trained with full-covariance matrices, and then 600-dimensional i-vectors were extracted for both training and test sets. For the x-vector system, 40-dimensional filterbank features were also extracted using HTK with a frame-length of 25ms. 
DNN configurations were the same as used in BIBREF12, and 512-dimensional x-vectors were extracted from the affine component of the segment-level layer immediately following the statistics pooling layer.", "Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets.", "In addition to the models trained on the BULATS data, it is also interesting to investigate the application of “out-of-the-box\" models for standard speaker verification tasks to this non-native speaker verification task as there is limited amounts of non-native learner English data that is publicly available. In this paper, the Kaldi-released BIBREF19 VoxCeleb x-vector/PLDA system was used as imported models, which was trained on augmented VoxCeleb 1 BIBREF17 and VoxCeleb 2 BIBREF18. There are more than 7,000 speakers in the VoxCeleb dataset with more than 2,000 hours of audio data, making it the largest publicly available speaker recognition dataset. 30 dimensional mel-frequency cepstral coefficients (MFCCs) were used as input features and system configurations were the same as the BULATS x-vector/PLDA one. It can be seen from Table TABREF10 that these out-of-domain models gave worse performance than baseline systems trained on a far smaller amount of BULATS data due to domain mismatch. Thus, two kinds of in-domain adaptation strategies were explored to make use of the BULATS training set: PLDA adaptation and x-vector extractor fine-tuning. For PLDA adaptation, x-vectors of the BULATS training set were first extracted using the VoxCeleb-trained x-vector extractor, and then employed to adapt the VoxCeleb-trained PLDA model with their mean and variance. For x-vector extractor fine-tuning, with all other layers of the VoxCeleb-trained model kept still, the output layer was re-initialised using the BULATS training set with the number of targets adjusted accordingly, and then all layers were fine-tuned on the BULATS training set. Here the PLDA adaptation system is referred to as X1 and the extractor fine-tuning system is referred to as X2. Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively “in-domain\" extractor prior to the PLDA back-end.", "Detection Error Tradeoff (DET) curves of the four x-vector/PLDA systems on the BULATS test set were illustrated in Figure FIGREF11. It can be seen that, both adaptation systems outperformed the original VoxCeleb-trained system in any threshold of the false alarm (FA) probability and the miss (MS) probability. The extractor fine-tuning system only gave higher MS probability than the PLDA adapted one with FA probability below 0.4%, while for a large range of FA probabilities above 0.4%, the extractor fine-tuning system outperformed the PLDA adapted one.", "Furthermore, by leveraging the large-scale VoxCeleb dataset, both adaptation systems produced lower EERs than baseline systems solely trained on BULATS data, especially the extractor fine-tuning one, which gave a reduction rate of 26$\\%$ in EER over the baseline x-vector/PLDA system on the BULATS test set. 
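The two in-domain adaptation strategies can be outlined in code. The sketch below is an illustration only, not the Kaldi recipes actually used: extractor fine-tuning (X2) keeps all pretrained layers, re-initialises the speaker-classification output layer for the number of BULATS training speakers, and then updates all parameters on the in-domain data; PLDA adaptation (X1) is summarised in a comment because it relies on Kaldi's unsupervised mean/variance adaptation rather than gradient training. The model class, optimiser choice and learning rate are assumptions carried over from the earlier extractor sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fine_tune_extractor(pretrained, num_bulats_speakers, train_loader, epochs=3):
    """X2-style adaptation sketch: new output layer for the in-domain speaker
    targets, then fine-tune every layer on the BULATS training set."""
    embed_dim = pretrained.output.in_features                      # assumed attribute name
    pretrained.output = nn.Linear(embed_dim, num_bulats_speakers)  # re-initialised targets
    optimiser = torch.optim.SGD(pretrained.parameters(), lr=1e-3)  # assumed settings
    for _ in range(epochs):
        for feats, speaker_ids in train_loader:
            logits, _ = pretrained(feats)
            loss = F.cross_entropy(logits, speaker_ids)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return pretrained

# X1-style adaptation leaves the extractor untouched: x-vectors for the BULATS
# training set are extracted with the VoxCeleb model and used to adapt the mean
# and variance of the VoxCeleb-trained PLDA back-end (Kaldi's unsupervised PLDA
# adaptation), so no speaker labels are needed for that step.
```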
It can also be seen from Figure FIGREF11 that, the extractor fine-tuning system gave consistently better performance than the baseline systems for almost any threshold of FA and MS." ], [ "As mentioned in Section SECREF8, gender is an important attribute when selecting impostors. For the non-native English speech data considered in this work, there are two additional attributes that may significantly impact performance, the candidate speaking ability (grade) and L1. In this section, the impact of both attributes on verification performance is analysed on the BULATS test set using the extractor fine-tuning system (X2) detailed in Section SECREF8 with impostors selected from the same gender group as the reference speaker. Taking EER as the operating threshold, both grade and L1 breakdown are investigated with respect to the number of impostor trials resulting in false alarm (FA) errors.", "As there were only a small number of speakers graded as C1 or C2 in the BULATS test set, the two grade groups were merged into one group as C in the following analysis. Also for a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each grade group from the BULATS test set, and the grade breakdown is shown in Table TABREF13. For lower grades, impostor trials from the grade group of A1 dominated FA errors as A1 speakers tend to speak short utterances, which is more challenging for the systems. For higher grades (B2 and C), impostor trials from the grade group of C constituted a larger portion of FA errors probably due to the fact that C speakers tend to speak long utterances in a more “native\" way and they are also similar to B2 speakers.", "", "The numbers of speakers from different L1 groups also varied in the BULATS test set. For a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each of 6 major L1s. The L1 breakdown is shown in Table TABREF14, where impostor trials from the same L1 group as the reference speaker generally dominated FA errors. English learners from the same L1 group tend to have similar accents when speaking English, which makes them more confusable to speaker verification systems compared to learners from a different L1 group. Particularly, impostors of Thai L1 constitute a considerable portion of FA errors for each L1, as A1 and A2 speakers dominate Thai L1 in the BULATS test set, which is different from other L1s where B1 and B2 speakers dominate." ], [ "Based on the analysis in the previous section, the impact of speaker attributes beyond gender, the grade and L1, were used as additional restrictions on the imposter set selection. The following forms of impostor selection were examined:", "gender, impostors from the same gender group as the reference speaker, as in Section SECREF8;", "", "grade, impostors from the same grade group as the reference speaker;", "$>$grade, impostors from higher grade groups than the reference speaker if the grade of the reference speaker is lower than C, otherwise from C; this case is of practical interest for impersonation in spoken language tests;", "L1, impostors from the same L1 group as the reference speaker;", "The number of total verification trials decreases with further restriction on impostors, which is shown in Table TABREF20. Table TABREF21 shows the impact on EER of restricting the possible set of impostors according to gender, L1 or grade on both BULATS and Linguaskill test sets. 
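The restricted impostor conditions listed above amount to filtering the candidate pool on gender, L1 and grade before trials are formed. The sketch below only illustrates that filtering; the metadata field names and the numeric grade ordering are assumptions, and the actual trial lists behind Table TABREF20 and Table TABREF21 were built from the BULATS and Linguaskill metadata.

```python
GRADE_ORDER = {"A1": 0, "A2": 1, "B1": 2, "B2": 3, "C": 4}  # C merges C1 and C2

def impostor_pool(reference, candidates, restrict):
    """Select impostors for one reference speaker.

    restrict is a set drawn from {"gender", "grade", ">grade", "L1"};
    each candidate is assumed to carry .speaker_id, .gender, .l1 and .grade.
    """
    pool = []
    for c in candidates:
        if c.speaker_id == reference.speaker_id:
            continue                                   # never score against oneself
        if "gender" in restrict and c.gender != reference.gender:
            continue
        if "L1" in restrict and c.l1 != reference.l1:
            continue
        if "grade" in restrict and c.grade != reference.grade:
            continue
        if ">grade" in restrict:
            # impostors from higher grade groups, or from C when the reference is C
            ref_g, cand_g = GRADE_ORDER[reference.grade], GRADE_ORDER[c.grade]
            if not (cand_g > ref_g or (reference.grade == "C" and c.grade == "C")):
                continue
        pool.append(c)
    return pool

# e.g. the impersonation scenario analysed in the paper:
# impostor_pool(ref, candidates, {"gender", "L1", ">grade"})
```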
Due to the lack of data for each L1 or grade, X1 and X2 systems that are adapted or fine-tuned on all of the BULATS training set are used for verification. As expected, restricting possible impostors according to speaker attributes yielded higher EERs as the percentage of impostors “close\" to the reference speaker increased. Take gender as the starting point, which is the configuration used in previous experiments in Section SECREF8. Further restricting the set of impostors to L1 again increased EERs agreeing with the results shown in Table TABREF14, similarly to grade. An interesting result in terms of handling impersonation is that, if the set of impostors is further restricted to $>$grade, EERs decrease compared to simply restricted to gender. The highest EER for both systems was achieved by restricted to gender+L1+grade, which indicates that all these are important speaker attributes of non-native data. The gender+L1+$>$grade case is more related to practical scenarios of impersonation, since it is more likely that a candidate chooses a substitute from the same gender and L1 group but speak the target language better to impersonate him/herself in order to obtain a higher grade in a spoken language test.", "", "", "For the impersonation scenario where the impostor trials are restricted to gender+L1+$>$grade, the DET curves for all systems including the unadapted VoxCeleb and BULATS trained systems are shown in Figure FIGREF22 for the BULATS test set. This allows the overall distribution of FA and MS errors for the aforementioned systems to be evaluated. It can be seen that, compared with the fine-tuned X2 system, the PLDA-adapted X1 system had a lower MS probability when the FA probability was low and had a higher MS probability when the FA probability was high. This implies that the X1 system tends to accept imposters as reference speakers while the X2 system tends to reject reference speakers as impostors. For malpractice candidate impersonation in spoken language tests, the X2 system may have a high cost as it may incorrectly identify malpractice in valid candidates. This would require manual checks to confirm this classification. In contrast, the X1 system may result in a lower level of security because it has a higher chance of misidentifying the candidate who is impersonating another. Based on these complementary trends, a score-level linear combination of the two systems was performed with weights of 0.7 and 0.3 for X1 and X2 systems, respectively. The combination system gave consistently better performance for a wide range of FA and MS probabilities than the aforementioned systems with an EER of 0.58% on the BULATS test set, as demonstrated in Figure FIGREF22. The same trend was also observed at these weightings on the Linguaskill test set with an EER of 0.72% for the combination system, approximately 8% relative reduction in EER from the X1 system. Thus, the combination of the two adapted systems making use of both large-scale VoxCeleb data and in-domain BULATS data, can serve as a sensible configuration for impersonation detection in spoken language tests.", "", "" ], [ "This paper has investigated malpractice in the form of candidate impersonation for spoken language assessment. This task has close relationships to standard speaker verification, but applied to the domain of non-native speech. Advanced neural network based speaker verification systems were built on both limited non-native spoken English data from the BULATS test, and a large standard corpus VoxCeleb. 
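The score-level combination described above is easy to reproduce once per-trial scores from the two adapted systems are available. The sketch below uses the 0.7/0.3 weights reported in the text; the EER search is a naive illustration added for completeness and is not the evaluation tooling behind the reported numbers.

```python
import numpy as np

def fuse_scores(scores_x1, scores_x2, w1=0.7, w2=0.3):
    """Linear score-level fusion of the PLDA-adapted (X1) and
    extractor-fine-tuned (X2) systems, with the weights given in the text."""
    return w1 * np.asarray(scores_x1) + w2 * np.asarray(scores_x2)

def equal_error_rate(scores, labels):
    """Naive EER search: labels are 1 for target trials, 0 for impostor trials."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best = (2.0, None)
    for t in np.unique(scores):
        fa = np.mean(scores[labels == 0] >= t)   # false alarms (impostor accepted)
        ms = np.mean(scores[labels == 1] < t)    # misses (target rejected)
        if abs(fa - ms) < best[0]:
            best = (abs(fa - ms), (0.5 * (fa + ms), t))
    eer, threshold = best[1]
    return eer, threshold

# fused = fuse_scores(scores_x1, scores_x2)
# eer, thr = equal_error_rate(fused, labels)   # decisions: fused >= thr
```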
For the configuration used all systems yielded relatively low EERs of less than 1%. Though built with only limited data the systems trained on just BULATS systems outperformed the “out-of-the-box\" VoxCeleb based system. However by adapting both the PLDA model and the deep speaker representation, the VoxCeleb-based systems could yield lower EERs. The attributes of the “impostors\" was then analysed in terms of both the impostor's grade and L1. As expected, L1 was the most important attribute of the impostor selected, though the grade did also influence performance. With the most likely scenario of impersonation by restricting impostors to be from the same gender, same L1, and higher grade group, the combination of the two adapted systems gave consistently better performance for a wide range of FA and MS probabilities, making it a sensible configuration for impersonation detection." ] ], "section_name": [ "Introduction", "Speaker Verification Systems", "Speaker Verification Systems ::: Deep neural network embedding extractor", "Speaker Verification Systems ::: PLDA classifier and adaptation", "Non-native Spoken English Corpora", "Experimental Setup", "Experimental results ::: Baseline system performance", "Experimental results ::: Impostor attributes analysis", "Experimental results ::: Overall system performance", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "e194de11e39aa308d74523cbeb863f6fef17f5da" ], "answer": [ { "evidence": [ "The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. The first section (A) contains eight questions about the candidate and their work. The second section (B) is a read-aloud section in which the candidates are asked to read eight sentences. The last three sections (C, D and E) have longer utterances of spontaneous speech elicited by prompts. In section C the candidates are asked to talk for one minute about a prompted business related topic. In section D, the candidate has one minute to describe a business situation illustrated in graphs or charts, such as pie or bar charts. The prompt for section E asks the candidate to imagine they are in a specific conversation and to respond to questions they may be asked in that situation (e.g. advice about planning a conference). This section is made up of 5x 20 seconds responses.", "In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability." ], "extractive_spans": [ "non-native speech from the BULATS test " ], "free_form_answer": "", "highlighted_evidence": [ "The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. ", "In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "468b166ba81d8f16c2afae7c6b92b1ed1f339e7b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2. % EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.", "FLOAT SELECTED: Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.", "Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets.", "In addition to the models trained on the BULATS data, it is also interesting to investigate the application of “out-of-the-box\" models for standard speaker verification tasks to this non-native speaker verification task as there is limited amounts of non-native learner English data that is publicly available. 
In this paper, the Kaldi-released BIBREF19 VoxCeleb x-vector/PLDA system was used as imported models, which was trained on augmented VoxCeleb 1 BIBREF17 and VoxCeleb 2 BIBREF18. There are more than 7,000 speakers in the VoxCeleb dataset with more than 2,000 hours of audio data, making it the largest publicly available speaker recognition dataset. 30 dimensional mel-frequency cepstral coefficients (MFCCs) were used as input features and system configurations were the same as the BULATS x-vector/PLDA one. It can be seen from Table TABREF10 that these out-of-domain models gave worse performance than baseline systems trained on a far smaller amount of BULATS data due to domain mismatch. Thus, two kinds of in-domain adaptation strategies were explored to make use of the BULATS training set: PLDA adaptation and x-vector extractor fine-tuning. For PLDA adaptation, x-vectors of the BULATS training set were first extracted using the VoxCeleb-trained x-vector extractor, and then employed to adapt the VoxCeleb-trained PLDA model with their mean and variance. For x-vector extractor fine-tuning, with all other layers of the VoxCeleb-trained model kept still, the output layer was re-initialised using the BULATS training set with the number of targets adjusted accordingly, and then all layers were fine-tuned on the BULATS training set. Here the PLDA adaptation system is referred to as X1 and the extractor fine-tuning system is referred to as X2. Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively “in-domain\" extractor prior to the PLDA back-end." ], "extractive_spans": [], "free_form_answer": "BULATS i-vector/PLDA\nBULATS x-vector/PLDA\nVoxCeleb x-vector/PLDA\nPLDA adaptation (X1)\n Extractor fine-tuning (X2) ", "highlighted_evidence": [ "FLOAT SELECTED: Table 2. % EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.", "FLOAT SELECTED: Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.", "Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets.", "Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively “in-domain\" extractor prior to the PLDA back-end." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "What standard large speaker verification corpora is used for evaluation?", "What systems are tested?" ], "question_id": [ "3241f90a03853fa85d287007d2d51e7843ee3d9b", "52e8f79814736fea96fd9b642881b476243e1698" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Fig. 1. Diagram of an automatic spoken language assessment system.", "Table 2. % EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.", "Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.", "Fig. 2. DET curves of the four x-vector/PLDA systems on the BULATS test set with impostors from the same gender group as the reference speaker.", "Table 4. L1 breakdown of the percentage of impostor trials with FA errors at the operating threshold of EER for the extractor fine-tuning system on a subset of the BULATS test set.", "Table 3. Grade breakdown of the percentage of impostor trials with FA errors at the operating threshold of EER for the extractor fine-tuning system on a subset of the BULATS test set.", "Table 5. Number of verification trials (in millions) with different restrictions on impostors for both BULATS and Linguaskill test sets.", "Fig. 3. DET curves of various systems on the BULATS test set with impostor trials selected from the group of the same gender, same L1 and higher grade as/than the reference speaker.", "Table 6. % EER performance of two adapted systems with different restrictions on impostors on both BULATS and Linguaskill test sets." ], "file": [ "1-Figure1-1.png", "4-Table2-1.png", "4-Table1-1.png", "4-Figure2-1.png", "5-Table4-1.png", "5-Table3-1.png", "6-Table5-1.png", "6-Figure3-1.png", "6-Table6-1.png" ] }
[ "What systems are tested?" ]
[ [ "1909.13695-4-Table1-1.png", "1909.13695-Experimental results ::: Baseline system performance-2", "1909.13695-Experimental results ::: Baseline system performance-3", "1909.13695-4-Table2-1.png" ] ]
[ "BULATS i-vector/PLDA\nBULATS x-vector/PLDA\nVoxCeleb x-vector/PLDA\nPLDA adaptation (X1)\n Extractor fine-tuning (X2) " ]
532
1601.01705
Learning to Compose Neural Networks for Question Answering
We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
{ "paragraphs": [ [ "This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations.", "Our model has two components, trained jointly: first, a collection of neural “modules” that can be freely composed (fig:teasera); second, a network layout predictor that assembles modules into complete deep networks tailored to each question (fig:teaserb). Previous work has used manually-specified modular structures for visual learning BIBREF0 . Here we:", "Training data consists of (world, question, answer) triples: our approach requires no supervision of network layouts. We achieve state-of-the-art performance on two markedly different question answering tasks: one with questions about natural images, and another with more compositional questions about United States geography." ], [ "We begin with a high-level discussion of the kinds of composed networks we would like to learn.", "Andreas15NMN describe a heuristic approach for decomposing visual question answering tasks into sequence of modular sub-problems. For example, the question What color is the bird? might be answered in two steps: first, “where is the bird?” (fig:examplesa), second, “what color is that part of the image?” (fig:examplesc). This first step, a generic module called find, can be expressed as a fragment of a neural network that maps from image features and a lexical item (here bird) to a distribution over pixels. This operation is commonly referred to as the attention mechanism, and is a standard tool for manipulating images BIBREF1 and text representations BIBREF2 .", "The first contribution of this paper is an extension and generalization of this mechanism to enable fully-differentiable reasoning about more structured semantic representations. fig:examplesb shows how the same module can be used to focus on the entity Georgia in a non-visual grounding domain; more generally, by representing every entity in the universe of discourse as a feature vector, we can obtain a distribution over entities that corresponds roughly to a logical set-valued denotation.", "Having obtained such a distribution, existing neural approaches use it to immediately compute a weighted average of image features and project back into a labeling decision—a describe module (fig:examplesc). But the logical perspective suggests a number of novel modules that might operate on attentions: e.g. combining them (by analogy to conjunction or disjunction) or inspecting them directly without a return to feature space (by analogy to quantification, fig:examplesd). These modules are discussed in detail in sec:model. Unlike their formal counterparts, they are differentiable end-to-end, facilitating their integration into learned models. 
Building on previous work, we learn behavior for a collection of heterogeneous modules from (world, question, answer) triples.", "The second contribution of this paper is a model for learning to assemble such modules compositionally. Isolated modules are of limited use—to obtain expressive power comparable to either formal approaches or monolithic deep networks, they must be composed into larger structures. fig:examples shows simple examples of composed structures, but for realistic question-answering tasks, even larger networks are required. Thus our goal is to automatically induce variable-free, tree-structured computation descriptors. We can use a familiar functional notation from formal semantics (e.g. Liang et al., 2011) to represent these computations. We write the two examples in fig:examples as", " (describe[color] find[bird])", "and", " (exists find[state])", "respectively. These are network layouts: they specify a structure for arranging modules (and their lexical parameters) into a complete network. Andreas15NMN use hand-written rules to deterministically transform dependency trees into layouts, and are restricted to producing simple structures like the above for non-synthetic data. For full generality, we will need to solve harder problems, like transforming What cities are in Georgia? (fig:teaser) into", " (and", " find[city]", " (relate[in] lookup[Georgia]))", "In this paper, we present a model for learning to select such structures from a set of automatically generated candidates. We call this model a dynamic neural module network." ], [ "There is an extensive literature on database question answering, in which strings are mapped to logical forms, then evaluated by a black-box execution model to produce answers. Supervision may be provided either by annotated logical forms BIBREF3 , BIBREF4 , BIBREF5 or from (world, question, answer) triples alone BIBREF6 , BIBREF7 . In general the set of primitive functions from which these logical forms can be assembled is fixed, but one recent line of work focuses on inducing new predicates functions automatically, either from perceptual features BIBREF8 or the underlying schema BIBREF9 . The model we describe in this paper has a unified framework for handling both the perceptual and schema cases, and differs from existing work primarily in learning a differentiable execution model with continuous evaluation results.", "Neural models for question answering are also a subject of current interest. These include approaches that model the task directly as a multiclass classification problem BIBREF10 , models that attempt to embed questions and answers in a shared vector space BIBREF11 and attentional models that select words from documents sources BIBREF2 . Such approaches generally require that answers can be retrieved directly based on surface linguistic features, without requiring intermediate computation. A more structured approach described by Yin15NeuralTable learns a query execution model for database tables without any natural language component. 
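Stepping back to the variable-free, tree-structured layouts introduced above, the following minimal Python sketch shows one way such layouts can be encoded and printed in the paper's s-expression notation. It is purely illustrative; the nested-tuple representation and helper names are assumptions, not the authors' implementation.

```python
# A layout is a nested tuple: (module_name, parameter_or_None, *child_layouts).
LAYOUT_BIRD = ("describe", "color", ("find", "bird"))
LAYOUT_STATE = ("exists", None, ("find", "state"))
LAYOUT_GEORGIA = ("and", None,
                  ("find", "city"),
                  ("relate", "in", ("lookup", "Georgia")))

def to_sexpr(layout):
    """Render a layout tuple in the paper-style s-expression notation."""
    name, param, *children = layout
    head = f"{name}[{param}]" if param else name
    if not children:
        return head
    return "(" + " ".join([head] + [to_sexpr(c) for c in children]) + ")"

def module_counts(layout, counts=None):
    """Count module types; counts like these make simple layout features."""
    counts = {} if counts is None else counts
    name, _param, *children = layout
    counts[name] = counts.get(name, 0) + 1
    for child in children:
        module_counts(child, counts)
    return counts

print(to_sexpr(LAYOUT_GEORGIA))   # (and find[city] (relate[in] lookup[Georgia]))
print(module_counts(LAYOUT_GEORGIA))
```

Counts of module types and their parameter arguments, as in the second helper, are one natural source of the layout features used when candidate structures are scored later in the paper.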
Previous efforts toward unifying formal logic and representation learning include those of Grefenstette13Logic, Krishnamurthy13CompVector, Lewis13DistributionalLogical, and Beltagy13Markov.", "The visually-grounded component of this work relies on recent advances in convolutional networks for computer vision BIBREF12 , and in particular the fact that late convolutional layers in networks trained for image recognition contain rich features useful for other vision tasks while preserving spatial information. These features have been used for both image captioning BIBREF1 and visual QA BIBREF13 .", "Most previous approaches to visual question answering either apply a recurrent model to deep representations of both the image and the question BIBREF14 , BIBREF15 , or use the question to compute an attention over the input image, and then answer based on both the question and the image features attended to BIBREF13 , BIBREF16 . Other approaches include the simple classification model described by Zhou15ClassVQA and the dynamic parameter prediction network described by Noh15DPPVQA. All of these models assume that a fixed computation can be performed on the image and question to compute the answer, rather than adapting the structure of the computation to the question.", "As noted, Andreas15NMN previously considered a simple generalization of these attentional approaches in which small variations in the network structure per-question were permitted, with the structure chosen by (deterministic) syntactic processing of questions. Other approaches in this general family include the “universal parser” sketched by Bottou14Reasoning, the graph transformer networks of Bottou97GraphTransformers, the knowledge-based neural networks of Towell94KBNN and the recursive neural networks of Socher13CVG, which use a fixed tree structure to perform further linguistic analysis without any external world representation. We are unaware of previous work that simultaneously learns both parameters for and structures of instance-specific networks." ], [ "Recall that our goal is to map from questions and world representations to answers. This process involves the following variables: a world representation $w$ , a question $x$ , an answer $y$ , a network layout $z$ , and the model parameters $\theta _\ell $ and $\theta _e$ .", "Our model is built around two distributions: a layout model $p(z|x;\theta _\ell )$ which chooses a layout for a sentence, and an execution model $p_z(y|w;\theta _e)$ which applies the network specified by $z$ to $w$ .", "For ease of presentation, we introduce these models in reverse order. We first imagine that $z$ is always observed, and in sec:model:modules describe how to evaluate and learn modules parameterized by $\theta _e$ within fixed structures. In sec:model:assemblingNetworks, we move to the real scenario, where $z$ is unknown. We describe how to predict layouts from questions and learn $\theta _e$ and $\theta _\ell $ jointly without layout supervision." ], [ "Given a layout $z$ , we assemble the corresponding modules into a full neural network (fig:teaserc), and apply it to the knowledge representation. Intermediate results flow between modules until an answer is produced at the root. We denote the output of the network with layout $z$ on input world $w$ as $\llbracket z \rrbracket _w$ ; when explicitly referencing the substructure of $z$ , we can alternatively write $\llbracket m(h^1, h^2) \rrbracket $ for a top-level module $m$ with submodule outputs $h^1$ and $h^2$ . We then define the execution model:", "$$p_z(y|w) = (\llbracket z \rrbracket _w)_y$$ (Eq. 
13) ", " (This assumes that the root module of $z$ produces a distribution over labels $y$ .) The set of possible layouts $z$ is restricted by module type constraints: some modules (like find above) operate directly on the input representation, while others (like describe above) also depend on input from specific earlier modules. Two base types are considered in this paper are Attention (a distribution over pixels or entities) and Labels (a distribution over answers).", "Parameters are tied across multiple instances of the same module, so different instantiated networks may share some parameters but not others. Modules have both parameter arguments (shown in square brackets) and ordinary inputs (shown in parentheses). Parameter arguments, like the running bird example in sec:programs, are provided by the layout, and are used to specialize module behavior for particular lexical items. Ordinary inputs are the result of computation lower in the network. In addition to parameter-specific weights, modules have global weights shared across all instances of the module (but not shared with other modules). We write $A, a, B, b, \\dots $ for global weights and $u^i, v^i$ for weights associated with the parameter argument $i$ . $\\oplus $ and $\\odot $ denote (possibly broadcasted) elementwise addition and multiplication respectively. The complete set of global weights and parameter-specific weights constitutes $\\theta _e$ . Every module has access to the world representation, represented as a collection of vectors $w^1, w^2, \\dots $ (or $W$ expressed as a matrix). The nonlinearity $\\sigma $ denotes a rectified linear unit.", "The modules used in this paper are shown below, with names and type constraints in the first row and a description of the module's computation following.", "=3mm", "|p0.95|", "Lookup ( $\\rightarrow $ Attention)", "lookup[ $i$ ] produces an attention focused entirely at the index $f(i)$ , where the relationship $f$ between words and positions in the input map is known ahead of time (e.g. string matches on database fields). ", "$$\\llbracket {\\texttt {lookup[i]}} \\rrbracket = e_{f(i)}$$ (Eq. 14) ", "where $e_i$ is the basis vector that is 1 in the $i$ th position and 0 elsewhere.", "Find ( $\\rightarrow $ Attention)", "find[ $i$ ] computes a distribution over indices by concatenating the parameter argument with each position of the input feature map, and passing the concatenated vector through a MLP: ", "$$\\llbracket {\\texttt {find[i]}} \\rrbracket = \\textrm {softmax}(a \\odot \\sigma (B v^i \\oplus C W \\oplus d))$$ (Eq. 15) ", "Relate (Attention $\\rightarrow $ Attention)", "relate directs focus from one region of the input to another. It behaves much like the find module, but also conditions its behavior on the current region of attention $h$ . Let $\\bar{w}(h) = \\sum _k h_k w^k$ , where $h_k$ is the $k^{th}$ element of $h$ . Then, ", "$$& \\llbracket {\\texttt {relate[i]}}(h) \\rrbracket = \\textrm {softmax}(a \\ \\odot \\nonumber \\\\\n&\\qquad \\sigma (B v^i \\oplus C W \\oplus D\\bar{w}(h) \\oplus e))$$ (Eq. 16) ", "And (Attention * $\\rightarrow $ Attention)", " and performs an operation analogous to set intersection for attentions. The analogy to probabilistic logic suggests multiplying probabilities: ", "$$\\llbracket {\\texttt {and}}(h^1, h^2, \\ldots ) \\rrbracket = h^1 \\odot h^2 \\odot \\cdots $$ (Eq. 17) ", "Describe (Attention $\\rightarrow $ Labels)", " describe[ $i$ ] computes a weighted average of $w$ under the input attention. 
This average is then used to predict an answer representation. With $\bar{w}$ as above, ", "$$\llbracket {\texttt {describe[i]}}(h) \rrbracket = \textrm {softmax}(A \sigma (B \bar{w}(h) + v^i))$$ (Eq. 18) ", "Exists (Attention $\rightarrow $ Labels)", " exists is the existential quantifier, and inspects the incoming attention directly to produce a label, rather than an intermediate feature vector like describe: ", "$$\llbracket {\texttt {exists}}(h) \rrbracket = \textrm {softmax}\Big (\big (\max _k h_k\big )a + b\Big )$$ (Eq. 19) ", "With $z$ observed, the model we have described so far corresponds largely to that of Andreas15NMN, though the module inventory is different—in particular, our new exists and relate modules do not depend on the two-dimensional spatial structure of the input. This enables generalization to non-visual world representations.", "Learning in this simplified setting is straightforward. Assuming the top-level module in each layout is a describe or exists module, the fully-instantiated network corresponds to a distribution over labels conditioned on layouts. To train, we maximize", " $\n\sum _{(w,y,z)} \log p_z(y|w;\theta _e)\n$ directly. This can be understood as a parameter-tying scheme, where the decisions about which parameters to tie are governed by the observed layouts $z$ ." ], [ "Next we describe the layout model $p(z|x;\theta _\ell )$ . We first use a fixed syntactic parse to generate a small set of candidate layouts, analogously to the way a semantic grammar generates candidate semantic parses in previous work BIBREF17 .", "A semantic parse differs from a syntactic parse in two primary ways. First, lexical items must be mapped onto a (possibly smaller) set of semantic primitives. Second, these semantic primitives must be combined into a structure that closely, but not exactly, parallels the structure provided by syntax. For example, state and province might need to be identified with the same field in a database schema, while all states have a capital might need to be identified with the correct (in situ) quantifier scope.", "While we cannot avoid the structure selection problem, continuous representations simplify the lexical selection problem. For modules that accept a vector parameter, we associate these parameters with words rather than semantic tokens, and thus turn the combinatorial optimization problem associated with lexicon induction into a continuous one. Now, in order to learn that province and state have the same denotation, it is sufficient to learn that their associated parameters are close in some embedding space—a task amenable to gradient descent. (Note that this is easy only in an optimizability sense, and not an information-theoretic one—we must still learn to associate each independent lexical item with the correct vector.) The remaining combinatorial problem is to arrange the provided lexical items into the right computational structure. In this respect, layout prediction is more like syntactic parsing than ordinary semantic parsing, and we can rely on an off-the-shelf syntactic parser to get most of the way there. 
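As a recap of the module inventory just defined, the equations translate almost directly into code. The NumPy sketch below is for intuition only: the dimensions, the random weights, and the reading of $a \odot \sigma (\cdot )$ followed by softmax as a per-position weighted sum are assumptions rather than the authors' implementation, and relate (Eq. 16) is omitted since it follows the same pattern as find.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_w, d_h, d_v, n_labels = 6, 16, 32, 8, 4   # illustrative sizes only
W = rng.normal(size=(d_w, K))                  # world: one feature vector per position/entity

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def find(v_i, a, B, C, d):
    """Eq. 15: attention over the K positions, given a word parameter vector v_i."""
    h = relu(B @ v_i[:, None] + C @ W + d[:, None])   # d_h x K
    return softmax(a @ h)                             # distribution over positions

def and_(*attns):
    """Eq. 17: elementwise product of attentions (a soft set intersection)."""
    out = attns[0].copy()
    for h in attns[1:]:
        out = out * h
    return out

def describe(h, v_i, A, B):
    """Eq. 18: label distribution from the attention-weighted average of world features."""
    w_bar = W @ h                                     # d_w vector
    return softmax(A @ relu(B @ w_bar + v_i))

def exists(h, a, b):
    """Eq. 19: label distribution read directly off the peak of the attention."""
    return softmax(h.max() * a + b)

# Toy evaluation of the layout (describe[color] find[bird]) with random weights.
v_bird = rng.normal(size=d_v)
att = find(v_bird, rng.normal(size=d_h), rng.normal(size=(d_h, d_v)),
           rng.normal(size=(d_h, d_w)), rng.normal(size=d_h))
v_color = rng.normal(size=d_h)
print(describe(att, v_color, rng.normal(size=(n_labels, d_h)), rng.normal(size=(d_h, d_w))))
```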
In this work, syntactic structure is provided by the Stanford dependency parser BIBREF18 .", "The construction of layout candidates is depicted in fig:layout, and proceeds as follows:", "Represent the input sentence as a dependency tree.", "Collect all nouns, verbs, and prepositional phrases that are attached directly to a wh-word or copula.", "Associate each of these with a layout fragment: Ordinary nouns and verbs are mapped to a single find module. Proper nouns to a single lookup module. Prepositional phrases are mapped to a depth-2 fragment, with a relate module for the preposition above a find module for the enclosed head noun.", "Form subsets of this set of layout fragments. For each subset, construct a layout candidate by joining all fragments with an and module, and inserting either a measure or describe module at the top (each subset thus results in two parse candidates.)", "All layouts resulting from this process feature a relatively flat tree structure with at most one conjunction and one quantifier. This is a strong simplifying assumption, but appears sufficient to cover most of the examples that appear in both of our tasks. As our approach includes both categories, relations and simple quantification, the range of phenomena considered is generally broader than previous perceptually-grounded QA work BIBREF8 , BIBREF19 .", "Having generated a set of candidate parses, we need to score them. This is a ranking problem; as in the rest of our approach, we solve it using standard neural machinery. In particular, we produce an LSTM representation of the question, a feature-based representation of the query, and pass both representations through a multilayer perceptron (MLP). The query feature vector includes indicators on the number of modules of each type present, as well as their associated parameter arguments. While one can easily imagine a more sophisticated parse-scoring model, this simple approach works well for our tasks.", "Formally, for a question $x$ , let $h_q(x)$ be an LSTM encoding of the question (i.e. the last hidden layer of an LSTM applied word-by-word to the input question). Let $\\lbrace z_1, z_2, \\ldots \\rbrace $ be the proposed layouts for $x$ , and let $f(z_i)$ be a feature vector representing the $i$ th layout. Then the score $s(z_i|x)$ for the layout $z_i$ is ", "$$s(z_i|x) = a^\\top \\sigma (B h_q(x) + C f(z_i) + d)$$ (Eq. 26) ", "i.e. the output of an MLP with inputs $h_q(x)$ and $f(z_i)$ , and parameters $\\theta _\\ell = \\lbrace a, B, C, d\\rbrace $ . Finally, we normalize these scores to obtain a distribution: ", "$$p(z_i|x;\\theta _\\ell ) = e^{s(z_i|x)} \\Big / \\sum _{j=1}^n e^{s(z_j|x)}$$ (Eq. 27) ", "Having defined a layout selection module $p(z|x;\\theta _\\ell )$ and a network execution model $p_z(y|w;\\theta _e)$ , we are ready to define a model for predicting answers given only (world, question) pairs. The key constraint is that we want to minimize evaluations of $p_z(y|w;\\theta _e)$ (which involves expensive application of a deep network to a large input representation), but can tractably evaluate $p(z|x;\\theta _\\ell )$ for all $z$ (which involves application of a shallow network to a relatively small set of candidates). 
This is the opposite of the situation usually encountered semantic parsing, where calls to the query execution model are fast but the set of candidate parses is too large to score exhaustively.", "In fact, the problem more closely resembles the scenario faced by agents in the reinforcement learning setting (where it is cheap to score actions, but potentially expensive to execute them and obtain rewards). We adopt a common approach from that literature, and express our model as a stochastic policy. Under this policy, we first sample a layout $z$ from a distribution $p(z|x;\\theta _\\ell )$ , and then apply $z$ to the knowledge source and obtain a distribution over answers $p(y|z,w;\\theta _e)$ .", "After $z$ is chosen, we can train the execution model directly by maximizing $\\log p(y|z,w;\\theta _e)$ with respect to $\\theta _e$ as before (this is ordinary backpropagation). Because the hard selection of $z$ is non-differentiable, we optimize $p(z|x;\\theta _\\ell )$ using a policy gradient method. The gradient of the reward surface $J$ with respect to the parameters of the policy is ", "$$\\nabla J(\\theta _\\ell ) = \\mathbb {E}[ \\nabla \\log p(z|x;\\theta _\\ell ) \\cdot r ]$$ (Eq. 28) ", "(this is the reinforce rule BIBREF20 ). Here the expectation is taken with respect to rollouts of the policy, and $r$ is the reward. Because our goal is to select the network that makes the most accurate predictions, we take the reward to be identically the negative log-probability from the execution phase, i.e. ", "$$\\mathbb {E}[(\\nabla \\log p(z|x;\\theta _\\ell )) \\cdot \\log p(y|z,w;\\theta _e)]$$ (Eq. 29) ", "Thus the update to the layout-scoring model at each timestep is simply the gradient of the log-probability of the chosen layout, scaled by the accuracy of that layout's predictions. At training time, we approximate the expectation with a single rollout, so at each step we update $\\theta _\\ell $ in the direction $\n(\\nabla \\log p(z|x;\\theta _\\ell )) \\cdot \\log p(y|z,w;\\theta _e)\n$ for a single $z \\sim p(z|x;\\theta _\\ell )$ . $\\theta _e$ and $\\theta _\\ell $ are optimized using adadelta BIBREF21 with $\\rho =0.95,$ $\\varepsilon =1\\mathrm {e}{-6}$ and gradient clipping at a norm of 10." ], [ "The framework described in this paper is general, and we are interested in how well it performs on datasets of varying domain, size and linguistic complexity. To that end, we evaluate our model on tasks at opposite extremes of both these criteria: a large visual question answering dataset, and a small collection of more structured geography questions." ], [ "Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22 . The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in fig:vqa:qualitative-results.", "We use the VQA 1.0 release, employing the development set for model selection and hyperparameter tuning, and reporting final results from the evaluation server on the test-standard set. For the experiments described in this section, the input feature representations $w_i$ are computed by the the fifth convolutional layer of a 16-layer VGGNet after pooling BIBREF12 . Input images are scaled to 448 $\\times $ 448 before computing their representations. 
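Before turning to the results, the layout-scoring and policy-gradient machinery from the previous section can be summarized in a short PyTorch sketch. Everything here is schematic: the single linear scorer standing in for the LSTM-plus-MLP ranker and the precomputed execution log-probabilities are illustrative stand-ins, not the authors' code; only the optimizer settings (adadelta with $\rho =0.95$ , $\varepsilon =1\mathrm {e}{-6}$ and clipping at norm 10) are taken from the text above.

```python
import torch

torch.manual_seed(0)
n_candidates, n_labels, d_f = 4, 3, 8
scorer = torch.nn.Linear(d_f, 1)                       # stand-in for the LSTM + MLP layout scorer
optimizer = torch.optim.Adadelta(scorer.parameters(), rho=0.95, eps=1e-6)

def reinforce_step(candidate_features, execution_log_probs, gold_label):
    """One rollout: sample z ~ p(z|x), reward it with log p(y|z,w), update the scorer."""
    scores = scorer(candidate_features).squeeze(-1)       # s(z_i|x), one score per candidate layout
    log_p_z = torch.log_softmax(scores, dim=-1)           # normalized layout distribution
    z = torch.multinomial(log_p_z.detach().exp(), 1).item()  # a single sampled layout
    reward = execution_log_probs[z][gold_label].detach()  # r = log p(y|z,w), held fixed here
    loss = -log_p_z[z] * reward                           # descent step follows grad log p(z|x) * r
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(scorer.parameters(), 10.0)
    optimizer.step()
    return z

features = torch.randn(n_candidates, d_f)                 # f(z_i) for each candidate layout
exec_log_probs = torch.log_softmax(torch.randn(n_candidates, n_labels), dim=-1)
reinforce_step(features, exec_log_probs, gold_label=1)
```

In a full system the execution log-probabilities would of course be produced by running the assembled module network for the sampled layout, rather than being precomputed as in this toy example.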
We found that performance on this task was best if the candidate layouts were relatively simple: only the describe, and, and find modules are used, and layouts contain at most two conjuncts.", "One weakness of this basic framework is a difficulty modeling prior knowledge about answers (of the form most bears are brown). This kind of linguistic “prior” is essential for the VQA task, and easily incorporated. We simply introduce an extra hidden layer for recombining the final module network output with the input sentence representation $h_q(x)$ (see eq:layout-score), replacing eq:simple-execution with: ", "$$\log p_z(y|w,x) = (A h_q(x) + B \llbracket z \rrbracket _w)_y$$ (Eq. 32) ", "(Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to influence the answer directly.", "Results are shown in tbl:vqa:quantitative-results. The use of dynamic networks provides a small gain, most noticeably on \"other\" questions. We achieve state-of-the-art results on this task, outperforming a highly effective visual bag-of-words model BIBREF23 , a model with dynamic network parameter prediction (but fixed network structure) BIBREF24 , a more conventional attentional model BIBREF13 , and a previous approach using neural module networks with no structure prediction BIBREF0 .", "Some examples are shown in fig:vqa:qualitative-results. In general, the model learns to focus on the correct region of the image, and tends to consider a broad window around the region. This facilitates answering questions like Where is the cat?, which requires knowledge of the surroundings as well as the object in question." ], [ "The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krish2013Grounded. This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches.", "The GeoQA domain consists of a set of entities (e.g. states, cities, parks) which participate in various relations (e.g. north-of, capital-of). Here we take the world representation to consist of two pieces: a set of category features (used by the find module) and a different set of relational features (used by the relate module). For our experiments, we use a subset of the features originally used by Krishnamurthy et al. The original dataset includes no quantifiers, and treats the questions What cities are in Texas? and Are there any cities in Texas? identically. Because we are interested in testing the parser's ability to predict a variety of different structures, we introduce a new version of the dataset, GeoQA+Q, which distinguishes these two cases, and expects a Boolean answer to questions of the second kind.", "Results are shown in tbl:geo:quantitative. As in the original work, we report the results of leave-one-environment-out cross-validation on the set of 10 environments. 
Our dynamic model (D-NMN) outperforms both the logical (LSP-F) and perceptual models (LSP-W) described by BIBREF8 , as well as a fixed-structure neural module net (NMN). This improvement is particularly notable on the dataset with quantifiers, where dynamic structure prediction produces a 20% relative improvement over the fixed baseline. A variety of predicted layouts are shown in fig:geo:qualitative." ], [ "We have introduced a new model, the dynamic neural module network, for answering queries about both structured and unstructured sources of information. Given only (question, world, answer) triples as training data, the model learns to assemble neural networks on the fly from an inventory of neural models, and simultaneously learns weights for these modules so that they can be composed into novel structures. Our approach achieves state-of-the-art results on two tasks. We believe that the success of this work derives from two factors:", "Continuous representations improve the expressiveness and learnability of semantic parsers: by replacing discrete predicates with differentiable neural network fragments, we bypass the challenging combinatorial optimization problem associated with induction of a semantic lexicon. In structured world representations, neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema. Perhaps more importantly, we can extend compositional question-answering machinery to complex, continuous world representations like images.", "Semantic structure prediction improves generalization in deep networks: by replacing a fixed network topology with a dynamic one, we can tailor the computation performed to each problem instance, using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters. In practice, this results in considerable gains in speed and sample efficiency, even with very little training data.", "These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation." ], [ "JA is supported by a National Science Foundation Graduate Fellowship. MR is supported by a fellowship within the FIT weltweit-Program of the German Academic Exchange Service (DAAD). This work was additionally supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Vision and Learning Center." ] ], "section_name": [ "Introduction", "Deep networks as functional programs", "Related work", "Model", "Evaluating modules", "Assembling networks", "Experiments", "Questions about images", "Questions about geography", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "46e5ea281159cbd3c2de92d958a1f2a835593916" ], "answer": [ { "evidence": [ "Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22 . The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in fig:vqa:qualitative-results.", "The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krish2013Grounded. This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches." ], "extractive_spans": [], "free_form_answer": "VQA and GeoQA", "highlighted_evidence": [ "Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22 .", "The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krish2013Grounded." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ], "nlp_background": [ "two" ], "paper_read": [ "no" ], "question": [ "What benchmark datasets they use?" ], "question_id": [ "5712a0b1e33484ebc6d71c70ae222109c08dede2" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Question Answering" ], "topic_background": [ "familiar" ] }
{ "caption": [ "Figure 1: A learned syntactic analysis (a) is used to assemble a collection of neural modules (b) into a deep neural network (c), and applied to a world representation (d) to produce an answer.", "Figure 2: Simple neural module networks, corresponding to the questions What color is the bird? and Are there any states? (a) A neural find module for computing an attention over pixels. (b) The same operation applied to a knowledge base. (c) Using an attention produced by a lower module to identify the color of the region of the image attended to. (d) Performing quantification by evaluating an attention directly.", "Figure 3: Generation of layout candidates. The input sentence (a) is represented as a dependency parse (b). Fragments of this dependency parse are then associated with appropriate modules (c), and these fragments are assembled into full layouts (d).", "Table 1: Results on the VQA test server. NMN is the parameter-tying model from Andreas et al. (2015), and D-NMN is the model described in this paper.", "Figure 4: Sample outputs for the visual question answering task. The second row shows the final attention provided as input to the top-level describe module. For the first two examples, the model produces reasonable parses, attends to the correct region of the images (the ear and the woman’s clothing), and generates the correct answer. In the third image, the verb is discarded and a wrong answer is produced.", "Table 2: Results on the GeoQA dataset, and the GeoQA dataset with quantification. Our approach outperforms both a purely logical model (LSP-F) and a model with learned perceptual predicates (LSP-W) on the original dataset, and a fixedstructure NMN under both evaluation conditions.", "Figure 5: Example layouts and answers selected by the model on the GeoQA dataset. For incorrect predictions, the correct answer is shown in parentheses." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "5-Figure3-1.png", "7-Table1-1.png", "7-Figure4-1.png", "8-Table2-1.png", "8-Figure5-1.png" ] }
[ "What benchmark datasets they use?" ]
[ [ "1601.01705-Questions about images-0", "1601.01705-Questions about geography-0" ] ]
[ "VQA and GeoQA" ]
535
1910.08772
MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity
We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and token-level polarity information. Despite its simplicity, we find our approach to be competitive with other logic-based NLI models on the SICK benchmark. We also use MonaLog in combination with the current state-of-the-art model BERT in a variety of settings, including for compositional data augmentation. We show that MonaLog is capable of generating large amounts of high-quality training data for BERT, improving its accuracy on SICK.
{ "paragraphs": [ [ "There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques.", "In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning.", "To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark.", "Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. 
To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance." ], [ "The goal of NLI is to determine, given a premise set $P$ and a hypothesis sentence $H$, whether $H$ follows from the meaning of $P$ BIBREF21. In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (H), Contradict (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure FIGREF1. Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows $\\uparrow ,\\downarrow $, which are the basic primitives of our logic) to tokens in text. These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation $\\downarrow $, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is finally a particular sequence of edits (e.g., see Figure FIGREF13) that derive the hypothesis text from the premise text rules and yield an entailment or contradiction.", "In the following sections, we provide the details of our particular implementation of these different components in MonaLog." ], [ "Given an input premise $P$, MonaLog first polarizes each of its tokens and constituents, calling the system described by BIBREF17, which performs polarization on a CCG parse tree. For example, a polarized $P$ could be every$^{\\leavevmode {\\color {red}\\uparrow }}$ linguist$^{\\leavevmode {\\color {red}\\downarrow }}$ swim$^{\\leavevmode {\\color {red}\\uparrow }}$. Note that since we ignore morphology in the system, tokens are represented by lemmas." ], [ "MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\\le $ linguist and swim $\\le $ move, which captures the facts that $[\\![\\mbox{\\em semanticist}]\\!]$ denotes a subset of $[\\![\\mbox{\\em linguist}]\\!]$, and that $[\\![\\mbox{\\em swim}]\\!]$ denotes a subset of $[\\![\\mbox{\\em move}]\\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\\le $ y or x $\\perp $ y relations only if both x and y are words in the premise-hypothesis pair. 
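Such a restricted knowledge base can be harvested with a few lines of code. The sketch below shows one simple way to do it with NLTK's WordNet interface, treating y as "bigger than" x whenever some sense of y is a hypernym of (or identical to) some sense of x. This is an illustration of the idea rather than necessarily the exact procedure used by MonaLog, and it assumes nltk and its WordNet data are installed.

```python
from nltk.corpus import wordnet as wn

def is_le(x, y):
    """True if some sense of y is (a hypernym of) some sense of x, i.e. roughly x <= y."""
    y_synsets = set(wn.synsets(y))
    for sx in wn.synsets(x):
        if sx in y_synsets:
            return True
        if y_synsets & set(sx.closure(lambda s: s.hypernyms())):
            return True
    return False

def build_knowledge_base(premise_tokens, hypothesis_tokens):
    """Collect x <= y facts, restricted to words occurring in the premise-hypothesis pair."""
    words = set(premise_tokens) | set(hypothesis_tokens)
    return {(x, y) for x in words for y in words if x != y and is_le(x, y)}

print(build_knowledge_base(["a", "linguist", "swims"], ["a", "person", "moves"]))
# e.g. {('linguist', 'person'), ('swims', 'moves')}
```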
Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\\le $ most $\\le $ many $\\le $ a few $=$ several $\\le $ some $=$ a; the $\\le $ some $=$ a; on $\\perp $ off; up $\\perp $ down; etc.", "We also need to keep track of relations that can potentially be derived from the $P$-$H$ sentence pair. For instance, for all adjectives and nouns that appear in the sentence pair, it is easy to obtain: adj + n $\\le $ n (black cat $\\le $ cat). Similarly, we have n + PP/relative clause $\\le $ n (friend in need $\\le $ friend, dog that bites $\\le $ dog), VP + advP/PP $\\le $ VP (dance happily/in the morning $\\le $ dance), and so on. We also have rules that extract pieces of knowledge from $P$ directly, e.g.: n$_1$ $\\le $ n$_2$ from sentences of the pattern every n$_1$ is a n$_2$. One can also connect MonaLog to bigger knowledge graphs or ontologies such as DBpedia.", "A sentence base ${S}$, on the other hand, stores the generated entailments and contradictions." ], [ "Once we have a polarized CCG tree, and some $\\le $ relations in ${K}$, generating entailments and contradictions is fairly straightforward. A concrete example is given in Figure FIGREF13. Note that the generated $\\le $ instances are capable of producing mostly monotonicity inferences, but MonaLog can be extended to include other more complex inferences in natural logic, hence the name MonaLog. This extension is addressed in more detail in HuChenMoss." ], [ "The key operation for generating entailments is replacement, or substitution. It can be summarized as follows: 1) For upward-entailing (UE) words/constituents, replace them with words/constituents that denote bigger sets. 2) For downward-entailing (DE) words/constituents, either replace them with those denoting smaller sets, or add modifiers (adjectives, adverbs and/or relative clauses) to create a smaller set. Thus for every$^{\\leavevmode {\\color {red}\\uparrow }}$ linguist$^{\\leavevmode {\\color {red}\\downarrow }}$ swim$^{\\leavevmode {\\color {red}\\uparrow }}$, MonaLog can produce the following three entailments by replacing each word with the appropriate word from ${K}$: most$^{\\leavevmode {\\color {red}\\uparrow }}$ linguist$^{\\leavevmode {\\color {red}\\downarrow }}$ swim$^{\\leavevmode {\\color {red}\\uparrow }}$, every$^{\\leavevmode {\\color {red}\\uparrow }}$ semanticist$^{\\leavevmode {\\color {red}\\downarrow }}$ swim$^{\\leavevmode {\\color {red}\\uparrow }}$ and every$^{\\leavevmode {\\color {red}\\uparrow }}$ linguist$^{\\leavevmode {\\color {red}\\downarrow }}$ move$^{\\leavevmode {\\color {red}\\uparrow }}$. These are results of one replacement.", "Performing replacement for multiple rounds/depths can easily produce many more entailments." ], [ "To generate sentences contradictory to the input sentence, we do the following: 1) if the sentence starts with “no (some)”, replace the first word with “some (no)”. 2) If the object is quantified by “a/some/the/every”, change the quantifier to “no”, and vice versa. 3) Negate the main verb or remove the negation. See examples in Figure FIGREF13." ], [ "MonaLog returns Neutral if it cannot find the hypothesis $H$ in ${S}.entailments$ or ${S}.contradictions$. Thus, there is no need to generate neutral sentences." ], [ "Now that we have a set of inferences and contradictions stored in ${S}$, we can simply see if the hypothesis is in either one of the sets by comparing the strings. 
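A stripped-down sketch of the replacement step on the running example may help make this concrete. The polarity tags, the toy knowledge base, and the simplified contradiction rule below are illustrative stand-ins; the actual system operates on polarized CCG parses with the full rule set described above.

```python
# Toy knowledge base of "x <= y" facts (x denotes a subset of y).
LE = {("semanticist", "linguist"), ("swim", "move"), ("every", "most"), ("most", "some")}

def smaller_than(word):
    return {x for (x, y) in LE if y == word}

def bigger_than(word):
    return {y for (x, y) in LE if x == word}

def entailments(polarized):
    """polarized: list of (word, polarity) pairs with polarity in {'up', 'down'}.
    One replacement step: up-arrow words move to bigger sets, down-arrow words to smaller sets."""
    out = set()
    for i, (word, pol) in enumerate(polarized):
        for new in (bigger_than(word) if pol == "up" else smaller_than(word)):
            words = [w for w, _ in polarized]
            words[i] = new
            out.add(" ".join(words))
    return out

def contradictions(polarized):
    """Toy stand-in for the quantifier/negation rules above: flip an initial quantifier to 'no'."""
    words = [w for w, _ in polarized]
    if words[0] in {"every", "some", "a", "the"}:
        return {" ".join(["no"] + words[1:])}
    return set()

premise = [("every", "up"), ("linguist", "down"), ("swim", "up")]
hypothesis = "every linguist move"
print(entailments(premise))   # {'most linguist swim', 'every semanticist swim', 'every linguist move'}
print("Entailment" if hypothesis in entailments(premise) else "not found after one replacement")
```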
If yes, then return Entailment or Contradiction; if not, return Neutral, as schematically shown in Figure FIGREF13. However, the exact-string-match method is too brittle. Therefore, we apply a heuristic. If the only difference between sentences $S_1$ and $S_2$ is in the set {“a”, “be”, “ing”}, then $S_1$ and $S_2$ are considered semantically equivalent.", "The search is implemented using depth first search, with a default depth of 2, i.e. at most 2 replacements for each input sentence. At each node, MonaLog “expands” the sentence (i.e., an entailment of its parent) by obtaining its entailments and contradictions, and checks whether $H$ is in either set. If so, the search is terminated; otherwise the systems keeps searching until all the possible entailments and contradictions up to depth 2 have been visited." ], [ "We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and performe fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT." ], [ "The SICK BIBREF1 dataset includes around 10,000 English sentence pairs that are annotated to have either “Entailment”, “Neutral” or “Contradictory” relations. We choose SICK as our testing ground for several reasons. First, we want to test on a large-scale dataset, since we have shown that a similar model BIBREF24 reaches good results on parts of the smaller FraCaS dataset BIBREF25. Second, we want to make our results comparable to those of previous logic-based models such as the ones described in BIBREF26, BIBREF27, BIBREF11, BIBREF13, which were also tested on SICK. We use the data split provided in the dataset: 4,439 training problems, 4,906 test problems and 495 trial problems, see Table TABREF16 for examples." ], [ "There are numerous issues with the original SICK dataset, as illustrated by BIBREF28, BIBREF29.", "They first manually checked 1,513 pairs tagged as “A entails B but B is neutral to A” (AeBBnA) in the original SICK, correcting 178 pairs that they considered to be wrong BIBREF28. Later, BIBREF29 extracted pairs from SICK whose premise and hypothesis differ in only one word, and created a simple rule-based system that used WordNet information to solve the problem. Their WordNet-based method was able to solve 1,651 problems, whose original labels in SICK were then manually checked and corrected against their system's output. They concluded that 336 problems are wrongly labeled in the original SICK. Combining the above two corrected subsets of SICK, minus the overlap, results in their corrected SICK dataset, which has 3,016 problems (3/10 of the full SICK), with 409 labels different from the original SICK (see breakdown in Table TABREF19). 16 of the corrections are in the trial set, 197 of them in the training set and 196 in the test set. This suggests that more than one out of ten problems in SICK are potentially problematic. For this reason, two authors of the current paper checked the 409 changes. We found that only 246 problems are labeled the same by our team and by BIBREF29. 
For cases where there is disagreement, we adjudicated the differences after a discussion.", "We are aware that the partially checked SICK (by two teams) is far from ideal. We therefore present results for two versions of SICK for experiment 1 (section SECREF4): the original SICK and the version corrected by our team. For the data augmentation experiment in section SECREF5, we only performed fine-tuning on the corrected SICK. As shown in a recent SICK annotation experiment by kalouli2019explaining, annotation is a complicated issue influenced by linguistic and non-linguistic factors. We leave checking the full SICK dataset to future work." ], [ "The goal of experiment 1 is to test how accurately MonaLog solves problems in a large-scale dataset. We first used the system to solve the 495 problems in the trial set and then manually identified the cases in which the system failed. Then we determined which syntactic transformations are needed for MonaLog. After improving the results on the trial data by introducing a preprocessing step to handle limited syntactic variation (see below), we applied MonaLog on the test set. This means that the rule base of the system was optimized on the trial data, and we can test its generalization capability on the test data.", "The main obstacle for MonaLog is the syntactic variations in the dataset, illustrated in some examples in Table TABREF16. There exist multiple ways of dealing with these variations: One approach is to `normalize' unknown syntactic structures to a known structure. For example, we can transform passive sentences into active ones and convert existential sentences into the base form (see ex. 8399 and 219 in Table TABREF16). Another approach is to use some more abstract syntactic/semantic representation so that the linear word order can largely be ignored, e.g., represent a sentence by its dependency parse, or use Abstract Meaning Representation. Here, we explore the first option and leave the second approach to future work. We believe that dealing with a wide range of syntactic variations requires tools designed specifically for that purpose. The goal of MonaLog is to generate entailments and contradictions based on a polarized sentence instead.", "Below, we list the most important syntactic transformations we perform in preprocessing.", "Convert all passive sentences to active using pass2act. If the passive does not contain a by phrase, we add by a person.", "Convert existential clauses into their base form (see ex. 219 in Table TABREF16).", "Other transformations: someone/anyone/no one $\\rightarrow ~$some/any/no person; there is no man doing sth. $\\rightarrow ~$no man is doing sth.; etc." ], [ "The results of our system on uncorrected and corrected SICK are presented in Table TABREF27, along with comparisons with other systems.", "Our accuracy on the uncorrected SICK (77.19%) is much higher than the majority baseline (56.36%) or the hypothesis-only baseline (56.87%) reported by BIBREF8, and only several points lower than current logic-based systems. Since our system is based on natural logic, there is no need for translation into logical forms, which makes the reasoning steps transparent and much easier to interpret. I.e., with entailments and contradictions, we can generate a natural language trace of the system, see Fig. FIGREF13.", "Our results on the corrected SICK are even higher (see lower part of Table TABREF27), demonstrating the effect of data quality on the final results. 
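The preprocessing rewrites listed above are simple enough to sketch with string patterns. The snippet below is only a rough illustration: the real pipeline uses the pass2act tool and parse information rather than regular expressions, and the specific patterns shown are assumptions.

```python
import re

def normalize_syntax(sentence):
    """Illustrative stand-ins for the surface rewrites described above."""
    s = sentence.lower()
    s = re.sub(r"\bsomeone\b", "some person", s)
    s = re.sub(r"\banyone\b", "any person", s)
    s = re.sub(r"\bno one\b", "no person", s)
    # "There is no man doing X"  ->  "no man is doing X"
    m = re.match(r"there is no (\w+) (.+)", s)
    if m:
        s = f"no {m.group(1)} is {m.group(2)}"
    return s

print(normalize_syntax("There is no man doing the trick"))   # "no man is doing the trick"
print(normalize_syntax("Someone is playing the piano"))      # "some person is playing the piano"
```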
Note that with some simple syntactic transformations we can gain 1-2 points in accuracy.", "Table TABREF28 shows MonaLog's performance on the individual relations. The system is clearly very good at identifying entailments and contradictions, as demonstrated by the high precision values, especially on the corrected SICK set (98.50 precision for E and 95.02 precision for C). The lower recall values are due to MonaLog's current inability to handle syntactic variation.", "Based on these results, we tested a hybrid model of MonaLog and BERT (see Table TABREF27) where we exploit MonaLog's strength: Since MonaLog has a very high precision on Entailment and Contradiction, we can always trust MonaLog if it predicts E or C; when it returns N, we then fall back to BERT. This hybrid model improves the accuracy of BERT by 1% absolute to 85.95% on the corrected SICK. On the uncorrected SICK dataset, the hybrid system performs worse than BERT.", "Since MonaLog is optimized for the corrected SICK, it may mislabel many E and C judgments in the uncorrected dataset. The stand-alone BERT system performs better on the uncorrected data (86.74%) than the corrected set (85.00%). The corrected set may be too inconsistent since only a part has been checked.", "Overall, these hybird results show that it is possible to combine our high-precision system with deep learning architectures. However, more work is necessary to optimize this combined system." ], [ "Upon closer inspection, some of MonaLog's errors consist of difficult cases, as shown in Table TABREF29. For example, in ex. 359, if our knowledge base ${K}$ contains the background fact $\\mbox{\\em chasing} \\le \\mbox{\\em running}$, then MonaLog's judgment of C would be correct. In ex. 1402, if crying means screaming, then the label should be E; however, if crying here means shedding tears, then the label should probably be N. Here we also see potentially problematic labels (ex. 1760, 3403) in the original SICK dataset.", "Another point of interest is that 19 of MonaLog's mistakes are related to the antonym pair man vs. woman (e.g., ex. 5793 in Table TABREF29). This points to inconsistency of the SICK dataset: Whereas there are at least 19 cases tagged as Neutral (e.g., ex. 5793), there are at least 17 such pairs that are annotated as Contradictions in the test set (e.g., ex. 3521), P: A man is dancing, H: A woman is dancing (ex. 9214), P: A shirtless man is jumping over a log, H: A shirtless woman is jumping over a log. If man and woman refer to the same entity, then clearly that entity cannot be man and woman at the same time, which makes the sentence pair a contradiction. If, however, they do not refer to the same entity, then they should be Neutral." ], [ "Our second experiment focuses on using MonaLog to generate additional training data for machine learning models such as BERT. To our knowledge, this is the first time that a rule-based NLI system has been successfully used to generate training data for a deep learning application." ], [ "As described above, MonaLog generates entailments and contradictions when solving problems. These can be used as additional training data for a machine learning model.", "I.e., we pair the newly generated sentences", "with their input sentence, creating new pairs for training. For example, we take all the sentences in the nodes in Figure FIGREF13 as inferences and all the sentences in rectangles as contradictions, and then form sentence pairs with the input sentence. 
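In code, forming the additional training pairs is a small step once the generated sentences are available. The function below is a hypothetical sketch of that pairing step; the label strings and data layout are assumptions rather than the project's actual format.

```python
def make_training_pairs(premise, generated_entailments, generated_contradictions):
    """Pair every generated sentence with the input premise to obtain new labeled examples."""
    pairs = [(premise, h, "ENTAILMENT") for h in sorted(generated_entailments)]
    pairs += [(premise, h, "CONTRADICTION") for h in sorted(generated_contradictions)]
    return pairs

print(make_training_pairs(
    "every linguist swim",
    {"most linguist swim", "every linguist move"},
    {"no linguist swim"}))
```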
The additional data can be used directly, almost without human intervention.", "Thus for experiment 2, the goal is to examine the quality of these generated sentence pairs. For this, we re-train a BERT model on these pairs. If BERT trained on the manually annotated SICK training data is improved by adding data generated by MonaLog, then we can conclude that the generated data is of high quality, even comparable to human annotated data, which is what we found.", "More specifically, we compare the performance of BERT models trained on a) SICK training data alone, and b) SICK training data plus the entailing and contradictory pairs generated by MonaLog.", "All experiments are carried out using our corrected version of the SICK data set.", "However, note that MonaLog is designed to only generate entailments and contradictions. Thus, we only have access to newly generated examples for those two cases, we do not acquire any additional neutral cases. Consequently, adding these examples to the training data will introduce a skewing that does not reflect the class distribution in the test set. Since this will bias the machine learner against neutral cases, we use the following strategy to counteract that tendency: We relabel all cases where BERT is not confident enough for either E or C into N. We set this threshold to 0.95 but leave further optimization of the threshold to future work." ], [ "MonaLog is prone to over-generation. For example, it may wrongly add the same adjective before a noun (phrase) twice to create a more specific noun, e.g., young young man $\\le $ young man $\\le $ man. Since it is possible that such examples influence the machine learning model negatively, we look into filtering such examples to improve the quality of the additional training data.", "We manually inspected 100 sentence pairs generated by MonaLog to check the quality and naturalness of the new sentences (see Table TABREF32 for examples). All of the generated sentences are correct in the sense that the relation between the premise and the hypothesis is correctly labeled as entailment or contradiction (see Table TABREF34).", "While we did not find any sentence pairs with wrong labels, some generated sentences are unnatural, as shown in Table TABREF32. Both unnatural examples contain two successive copies of the same PP.", "Note that our data generation hinges on correct polarities on the words and constituents. For instance, in the last example of Table TABREF32, the polarization system needs to know that few is downward entailing on both of its arguments, and without flips the arrow of its argument, in order to produce the correct polarities, on which the replacement of MonaLog depends.", "In order to filter unnatural sentences, such as the examples in Table TABREF32, we use a rule-based filter and remove sentences that contain bigrams of repeated words. We experiment with using one quarter or one half randomly selected sentences in addition to a setting where we use the complete set of generated sentences." ], [ "Table TABREF37 shows the amount of additional sentence pairs per category along with the results of using the automatically generated sentences as additional training data. It is obvious that adding the additional training data results in gains in accuracy even though the training data becomes increasingly skewed towards E and C. When we add all additional sentence pairs, accuracy increases by more than 1.5 percent points. 
This demonstrates both the robustness of BERT in the current experiment and the usefulness of the generated data. The more data we add, the better the system performs.", "We also see that raising the threshold to relabel uncertain cases as neutral gives a small boost, from 86.51% to 86.71%. This translates into 10 cases where the relabeling corrected the answer. Finally, we also investigated whether the hybrid system, i.e., MonaLog followed by the re-trained BERT, can also profit from the additional training data. Intuitively, we would expect smaller gains since MonaLog already handles a fair amount of the entailments and contradictions, i.e., those cases where BERT profits from more examples. However the experiments show that the hybrid system reaches an even higher accuracy of 87.16%, more than 2 percent points above the baseline, equivalent to roughly 100 more problems correctly solved. Setting the high threshold for BERT to return E or C further improves accuracy to 87.49%. This brings us into the range of the state-of-the-art results, even though a direct comparison is not possible because of the differences between the corrected and uncorrected dataset." ], [ "We have presented a working natural-logic-based system, MonaLog, which attains high accuracy on the SICK dataset and can be used to generated natural logic proofs. Considering how simple and straightforward our method is, we believe it can serve as a strong baseline or basis for other (much) more complicated systems, either logic-based or ML/DL-based. In addition, we have shown that MonaLog can generate high-quality training data, which improves the accuracy of a deep learning model when trained on the expanded dataset. As a minor point, we manually checked the corrected SICK dataset by BIBREF28, BIBREF29.", "There are several directions for future work. The first direction concerns the question how to handle syntactic variation from natural language input. That is, the computational process(es) for inference will usually be specified in terms of strict syntactic conditions, and naturally occurring sentences will typically not conform to those conditions. Among the strategies which allow their systems to better cope with premises and hypotheses with various syntactic structures are sophisticated versions of alignment used by e.g. MacCartney,YanakaMMB18. We will need to extend MonaLog to be able to handle such variation. In the future, we plan to use dependency relations as representations of natural language input and train a classifier that can determine which relations are crucial for inference.", "Second, as mentioned earlier, we are in need of a fully (rather than partially) checked SICK dataset to examine the impact of data quality on the results since the partially checked dataset may be inherently inconsistent between the checked and non-checked parts.", "Finally, with regard to the machine learning experiments, we plan to investigate other methods of addressing the imbalance in the training set created by additional entailments and contradictions. We will look into options for artificially creating neutral examples, e.g. by finding reverse entailments, as illustrated by richardson2019probing." ], [ "We thank the anonymous reviewers for their helpful comments. Hai Hu is supported by China Scholarship Council." 
] ], "section_name": [ "Introduction", "Our System: MonaLog", "Our System: MonaLog ::: Polarization (Arrow Tagging)", "Our System: MonaLog ::: Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@", "Our System: MonaLog ::: Generation", "Our System: MonaLog ::: Generation ::: Entailments/inferences", "Our System: MonaLog ::: Generation ::: Contradictory sentences", "Our System: MonaLog ::: Generation ::: Neutral sentences", "Our System: MonaLog ::: Search", "MonaLog and SICK", "MonaLog and SICK ::: The SICK Dataset", "MonaLog and SICK ::: Hand-corrected SICK", "Experiment 1: Using MonaLog Directly ::: Setup and Preprocessing", "Experiment 1: Using MonaLog Directly ::: Results", "Experiment 1: Using MonaLog Directly ::: Error Analysis", "Experiment 2: Data Generation Using MonaLog", "Experiment 2: Data Generation Using MonaLog ::: Setup", "Experiment 2: Data Generation Using MonaLog ::: Data Filtering and Quality Control", "Experiment 2: Data Generation Using MonaLog ::: Results", "Conclusions and Future Work", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "eb8e6c6ed92b80cc42bac8e39e480ca641184eeb" ], "answer": [ { "evidence": [ "To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery)." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "8c8fd01ce3f04cc760a32b2af57ed347c837d639" ], "answer": [ { "evidence": [ "In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning.", "Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance.", "We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. 
To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and performe fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT." ], "extractive_spans": [], "free_form_answer": "They use Monalog for data-augmentation to fine-tune BERT on this task", "highlighted_evidence": [ "In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86.", "Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. ", "We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and performe fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "472444c166c4bb6e942e0b2d97a61f3b74b3f953" ], "answer": [ { "evidence": [ "MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\\le $ linguist and swim $\\le $ move, which captures the facts that $[\\![\\mbox{\\em semanticist}]\\!]$ denotes a subset of $[\\![\\mbox{\\em linguist}]\\!]$, and that $[\\![\\mbox{\\em swim}]\\!]$ denotes a subset of $[\\![\\mbox{\\em move}]\\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\\le $ y or x $\\perp $ y relations only if both x and y are words in the premise-hypothesis pair. Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\\le $ most $\\le $ many $\\le $ a few $=$ several $\\le $ some $=$ a; the $\\le $ some $=$ a; on $\\perp $ off; up $\\perp $ down; etc." ], "extractive_spans": [], "free_form_answer": "They derive it from Wordnet", "highlighted_evidence": [ "MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\\le $ linguist and swim $\\le $ move, which captures the facts that $[\\![\\mbox{\\em semanticist}]\\!]$ denotes a subset of $[\\![\\mbox{\\em linguist}]\\!]$, and that $[\\![\\mbox{\\em swim}]\\!]$ denotes a subset of $[\\![\\mbox{\\em move}]\\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. 
Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\\le $ y or x $\\perp $ y relations only if both x and y are words in the premise-hypothesis pair. Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\\le $ most $\\le $ many $\\le $ a few $=$ several $\\le $ some $=$ a; the $\\le $ some $=$ a; on $\\perp $ off; up $\\perp $ down; etc." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do they beat current state-of-the-art on SICK?", "How do they combine MonaLog with BERT?", "How do they select monotonicity facts?" ], "question_id": [ "ee27e5b56e439546d710ce113c9be76e1bfa1a3d", "4688534a07a3cbd8afa738eea02cc6981a4fd285", "45893f31ef07f0cca5783bd39c4e60630d6b93b3" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: An illustration of our general monotonicity reasoning pipeline using an example premise and hypothesis pair: All schoolgirls are on the train and All happy schoolgirls are on the train.", "Figure 2: Example search tree for SICK 340, where P is A schoolgirl with a black bag is on a crowded train, with the H : A girl with a black bag is on a crowded train. Only one replacement is allowed at each step.", "Table 1: Examples from SICK (Marelli et al., 2014) and corrected SICK (Kalouli et al., 2017, 2018) w/ syntactic variations. n.a.:example not checked by Kalouli and her colleagues. C: contradiction; E: entailment; N: neutral.", "Table 3: Performance on the SICK test set, original SICK above and corrected SICK below. P / R for MonaLog averaged across three labels. Results involving BERT are averaged across six runs; same for later experiments.", "Table 4: Results of MonaLog per relation. C: contradiction; E: entailment; N: neutral.", "Table 5: Examples of incorrect answers by MonaLog; n.a. = the problem has not been checked in corr. SICK.", "Table 6: Sentence pairs generated by MonaLog, lemmatized.", "Table 7: Quality of 100 manually inspected sentences.", "Table 8: Results of BERT trained on MonaLog-generated entailments and contradictions plus SICK.train (using the corrected SICK set)." ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Table6-1.png", "8-Table7-1.png", "9-Table8-1.png" ] }
[ "How do they combine MonaLog with BERT?", "How do they select monotonicity facts?" ]
[ [ "1910.08772-Introduction-3", "1910.08772-Introduction-1", "1910.08772-MonaLog and SICK-0" ], [ "1910.08772-Our System: MonaLog ::: Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@-0" ] ]
[ "They use Monalog for data-augmentation to fine-tune BERT on this task", "They derive it from Wordnet" ]
537
1909.11467
Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish
Kurdish is a less-resourced language consisting of different dialects written in various scripts. Approximately 30 million people in different countries speak the language. The lack of corpora is one of the main obstacles in Kurdish language processing. In this paper, we present KTC-the Kurdish Textbooks Corpus, which is composed of 31 K-12 textbooks in Sorani dialect. The corpus is normalized and categorized into 12 educational subjects containing 693,800 tokens (110,297 types). Our resource is publicly available for non-commercial use under the CC BY-NC-SA 4.0 license.
{ "paragraphs": [ [ "Kurdish is an Indo-European language mainly spoken in central and eastern Turkey, northern Iraq and Syria, and western Iran. It is a less-resourced language BIBREF0, in other words, a language for which general-purpose grammars and raw internet-based corpora are the main existing resources. The language is spoken in five main dialects, namely, Kurmanji (aka Northern Kurdish), Sorani (aka Central Kurdish), Southern Kurdish, Zazaki and Gorani BIBREF1.", "Creating lexical databases and text corpora are essential tasks in natural language processing (NLP) development. Text corpora are knowledge repositories which provide semantic descriptions of words. The Kurdish language lacks diverse corpora in both raw and annotated forms BIBREF2, BIBREF3. According to the literature, there is no domain-specific corpus for Kurdish.", "In this paper, we present KTC, a domain-specific corpus containing K-12 textbooks in Sorani. We consider a domain as a set of related concepts, and a domain-specific corpus as a collection of documents relevant to those concepts BIBREF4. Accordingly, we introduce KTC as a domain-specific corpus because it is based on the textbooks which have been written and compiled by a group of experts, appointed by the Ministry of Education (MoE) of the Kurdistan Region of Iraq, for educational purposes at the K-12 level. The textbooks are selected, written, compiled, and edited by experts in each subject and also by language editors based on a unified grammar and orthography. This corpus was initially collected as an accurate source for developing a Sorani Kurdish spellchecker for scientific writing. KTC contains a range of subjects, and its content is categorized according to those subjects. Given the accuracy of the text from scientific, grammatical, and orthographic points of view, we believe that it is also a fine-grained resource. The corpus will contribute to various NLP tasks in Kurdish, particularly in language modeling and grammatical error correction.", "In the rest of this paper, Section SECREF2 reviews the related work, Section SECREF3 presents the corpus, Section SECREF4 addresses the challenges in the project and, Section SECREF5 concludes the paper." ], [ "Although the initiative to create a corpus for Kurdish dates back to 1998 BIBREF5, efforts in creating machine-readable corpora for Kurdish are recent. The first machine-readable corpus for Kurdish is the Leipzig Corpora Collection which is constructed using different sources on the Web BIBREF6. Later, Pewan BIBREF2 and Bianet BIBREF7 were developed as general-purpose corpora based on news articles. Kurdish corpora are also constructed for specific tasks such as dialectology BIBREF8, BIBREF3, machine transliteration BIBREF9, and part-of-speech (POS) annotation BIBREF10, BIBREF11. However, to the best of our knowledge, currently, there is no domain-specific corpus for Kurdish dialects." ], [ "KTC is composed of 31 educational textbooks published from 2011 to 2018 in various topics by the MoE. We received the material from the MoE partly in different versions of Microsoft Word and partly in Adobe InDesign formats. In the first step, we categorized each textbook based on the topics and chapters. As the original texts were not in Unicode, we converted the content to Unicode. 
This step was followed by a pre-processing stage where the texts were normalized by replacing zero-width-non-joiner (ZWNJ) BIBREF2 and manually verifying the orthography based on the reference orthography of the Kurdistan Region of Iraq. In the normalization process, we did not remove punctuation and special characters so that the corpus can be easily adapted to our current task and also to future tasks where the integrity of the text may be required.", "The students could choose to go to the Institutes instead of High Schools after the Secondary School. The Institutes focus on professional and technical education aiming at training technicians.", "As an experiment, we present the top 15 most used tokens of the textbooks in KTC, which are illustrated in Figure FIGREF4. We observe that, apart from subject-specific tokens such as (ئابوورى > (economics), بازەرگانى > (business)) in economics, (=,$\times $ and وزەى > (energy)) in physics, and (خوداى > (god), گەورە > (great) and واتە > (meaning)) in theology, the most frequent tokens are conjunctions, prepositions, pronouns or punctuation. These are not descriptive of any one subject, while each subject's top tokens are descriptive of its content. The plot in Figure FIGREF4 follows Zipf's law to some extent, wherein the frequency of a word is inversely proportional to its rank BIBREF12. Here, not only words but also punctuation and special characters are considered tokens (see Section SECREF1). (An illustrative sketch of this normalization and token counting is given below.)", "The corpus is available at https://github.com/KurdishBLARK/KTC." ], [ "Previously, researchers have addressed the challenges in Kurdish corpora development BIBREF2, BIBREF13, BIBREF3. We highlight two main challenges we faced during the KTC development. First, most of the written Kurdish resources have not been digitized BIBREF14, or they are either not publicly available or are not fully convertible. Second, Kurdish text processing suffers from different orthographic issues BIBREF9 mainly due to the lack of standard orthography and the usage of non-Unicode keyboards. Therefore, we carried out a semi-automatic conversion, which made the process costly in terms of time and human assistance." ], [ "We presented KTC, the Kurdish Textbooks Corpus, as the first domain-specific corpus for Sorani Kurdish. This corpus will pave the way for further developments in Kurdish language processing. We have made the corpus available at https://github.com/KurdishBLARK/KTC for non-commercial use. We are currently working on a project on Sorani spelling error detection and correction. As future work, we are aiming to develop a similar corpus for all Kurdish dialects, particularly Kurmanji." ], [ "We gratefully acknowledge the generous assistance of the Ministry of Education of the Kurdistan Region of Iraq, particularly the General Directorate of Curriculum and Printing, in providing us with the data for the KTC corpus. Our special gratitude goes to Ms. Namam Jalal Rasheed and Mr. Kawa Omer Muhammad for their assistance in making the required data available and resolving the copyright issues." ] ], "section_name": [ "Introduction", "Related work", "The Corpus", "Challenges", "Conclusion", "Acknowledgments" ] }
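As a rough illustration of the preprocessing and token-frequency analysis described for KTC above, the following sketch normalizes zero-width non-joiners and lists the most frequent tokens while keeping punctuation as tokens. It is an assumed, minimal reading of the pipeline (in particular, simply dropping ZWNJ is a simplification of the orthography-based normalization), not the actual KTC code.

```python
# Minimal sketch (assumed, not the KTC pipeline): ZWNJ normalization and
# top-token counting, with punctuation kept as tokens as in the corpus.
import re
from collections import Counter
from typing import List, Tuple

ZWNJ = "\u200c"  # zero-width non-joiner

def normalize(text: str) -> str:
    # Simplification: drop stray ZWNJ; the real pipeline instead follows the
    # reference orthography of the Kurdistan Region of Iraq.
    return text.replace(ZWNJ, "")

def top_tokens(text: str, n: int = 15) -> List[Tuple[str, int]]:
    # Words and individual punctuation marks both count as tokens.
    tokens = re.findall(r"\w+|[^\w\s]", normalize(text), flags=re.UNICODE)
    return Counter(tokens).most_common(n)
```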
{ "answers": [ { "annotation_id": [ "82024c55c3aabf914b901626dcaf5d5dcd9a3f56" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "a3bf6742ff9bacc33920eaf33752464a7489656c" ], "answer": [ { "evidence": [ "KTC is composed of 31 educational textbooks published from 2011 to 2018 in various topics by the MoE. We received the material from the MoE partly in different versions of Microsoft Word and partly in Adobe InDesign formats. In the first step, we categorized each textbook based on the topics and chapters. As the original texts were not in Unicode, we converted the content to Unicode. This step was followed by a pre-processing stage where the texts were normalized by replacing zero-width-non-joiner (ZWNJ) BIBREF2 and manually verifying the orthography based on the reference orthography of the Kurdistan Region of Iraq. In the normalization process, we did not remove punctuation and special characters so that the corpus can be easily adapted our current task and also to future tasks where the integrity of the text may be required." ], "extractive_spans": [ "by replacing zero-width-non-joiner (ZWNJ) BIBREF2 and manually verifying the orthography" ], "free_form_answer": "", "highlighted_evidence": [ "This step was followed by a pre-processing stage where the texts were normalized by replacing zero-width-non-joiner (ZWNJ) BIBREF2 and manually verifying the orthography based on the reference orthography of the Kurdistan Region of Iraq." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "f9f7e9b8ed3ab40e8cef9695347365f65c3c635e" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 ." ], "extractive_spans": [], "free_form_answer": "Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "9f92348ea95f70bce6c5de4d08b4adfd0e2d905c" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "47fa7c63bc2d7293aef0919d4bc92e0e66d6f2ba" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ], "nlp_background": [ "", "", "", "two", "two" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "Is the corpus annotated?", "How is the corpus normalized?", "What are the 12 categories devised?", "Is the corpus annotated with a phonetic transcription?", "Is the corpus annotated with Part-of-Speech tags?" 
], "question_id": [ "fb1c2ff0872084241b9725b4f07750bd3e1df793", "9d9f6cc0f026f7168fcea461baff4b8a925a185f", "3d6015d722de6e6297ba7bfe7cb0f8a67f660636", "2cc63f42410eff3bcb15cfddc593d8aab9413eea", "0a9ced54324e70973354978cccef1c70dee5a543" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "dialect", "dialect", "dialect", "dialects", "dialects" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 .", "Figure 1: Common tokens among textbook subjects." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png" ] }
[ "What are the 12 categories devised?" ]
[ [ "1909.11467-2-Table1-1.png" ] ]
[ "Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study" ]
541
1804.08186
Automatic Language Identification in Texts: A Survey
Language identification (LI) is the problem of determining the natural language that a document or part thereof is written in. Automatic LI has been extensively researched for over fifty years. Today, LI is a key part of many text processing pipelines, as text processing techniques generally assume that the language of the input text is known. Research in this area has recently been especially active. This article provides a brief history of LI research, and an extensive survey of the features and methods used so far in the LI literature. For describing the features and methods we introduce a unified notation. We discuss evaluation methods, applications of LI, as well as off-the-shelf LI systems that do not require training by the end user. Finally, we identify open issues, survey the work to date on each issue, and propose future directions for research in LI.
{ "paragraphs": [ [ "Language identification (“LI”) is the task of determining the natural language that a document or part thereof is written in. Recognizing text in a specific language comes naturally to a human reader familiar with the language. intro:langid presents excerpts from Wikipedia articles in different languages on the topic of Natural Language Processing (“NLP”), labeled according to the language they are written in. Without referring to the labels, readers of this article will certainly have recognized at least one language in intro:langid, and many are likely to be able to identify all the languages therein.", "Research into LI aims to mimic this human ability to recognize specific languages. Over the years, a number of computational approaches have been developed that, through the use of specially-designed algorithms and indexing structures, are able to infer the language being used without the need for human intervention. The capability of such systems could be described as super-human: an average person may be able to identify a handful of languages, and a trained linguist or translator may be familiar with many dozens, but most of us will have, at some point, encountered written texts in languages they cannot place. However, LI research aims to develop systems that are able to identify any human language, a set which numbers in the thousands BIBREF0 .", "In a broad sense, LI applies to any modality of language, including speech, sign language, and handwritten text, and is relevant for all means of information storage that involve language, digital or otherwise. However, in this survey we limit the scope of our discussion to LI of written text stored in a digitally-encoded form.", "Research to date on LI has traditionally focused on monolingual documents BIBREF1 (we discuss LI for multilingual documents in openissues:multilingual). In monolingual LI, the task is to assign each document a unique language label. Some work has reported near-perfect accuracy for LI of large documents in a small number of languages, prompting some researchers to label it a “solved task” BIBREF2 . However, in order to attain such accuracy, simplifying assumptions have to be made, such as the aforementioned monolinguality of each document, as well as assumptions about the type and quantity of data, and the number of languages considered.", "The ability to accurately detect the language that a document is written in is an enabling technology that increases accessibility of data and has a wide variety of applications. For example, presenting information in a user's native language has been found to be a critical factor in attracting website visitors BIBREF3 . Text processing techniques developed in natural language processing and Information Retrieval (“IR”) generally presuppose that the language of the input text is known, and many techniques assume that all documents are in the same language. In order to apply text processing techniques to real-world data, automatic LI is used to ensure that only documents in relevant languages are subjected to further processing. In information storage and retrieval, it is common to index documents in a multilingual collection by the language that they are written in, and LI is necessary for document collections where the languages of documents are not known a-priori, such as for data crawled from the World Wide Web. Another application of LI that predates computational methods is the detection of the language of a document for routing to a suitable translator.
This application has become even more prominent due to the advent of Machine Translation (“MT”) methods: in order for MT to be applied to translate a document to a target language, it is generally necessary to determine the source language of the document, and this is the task of LI. LI also plays a part in providing support for the documentation and use of low-resource languages. One area where LI is frequently used in this regard is in linguistic corpus creation, where LI is used to process targeted web crawls to collect text resources for low-resource languages.", "A large part of the motivation for this article is the observation that LI lacks a “home discipline”, and as such, the literature is fragmented across a number of fields, including NLP, IR, machine learning, data mining, social media analysis, computer science education, and systems science. This has hampered the field, in that there have been many instances of research being carried out with only partial knowledge of other work on the topic, and the myriad of published systems and datasets.", "Finally, it should be noted that this survey does not make a distinction between languages, language varieties, and dialects. Whatever demarcation is made between languages, varieties and dialects, a system is trained to identify the associated document classes. Of course, the more similar two classes are, the more challenging it is for a system to discriminate between them. Training a system to discriminate between similar languages such as Croatian and Serbian BIBREF4 , language varieties like Brazilian and European Portuguese BIBREF5 , or a set of Arabic dialects BIBREF6 is more challenging than training systems to discriminate between, for example, Japanese and Finnish. Even so, as evidenced in this article, from a computational perspective, the algorithms and features used to discriminate between languages, language varieties, and dialects are identical." ], [ "LI is in some ways a special case of text categorization, and previous research has examined applying standard text categorization methods to LI BIBREF7 , BIBREF8 .", " BIBREF9 provides a definition of text categorization, which can be summarized as the task of mapping a document onto a pre-determined set of classes. This is a very broad definition, and indeed one that is applicable to a wide variety of tasks, amongst which falls modern-day LI. The archetypal text categorization task is perhaps the classification of newswire articles according to the topics that they discuss, exemplified by the Reuters-21578 dataset BIBREF10 . However, LI has particular characteristics that make it different from typical text categorization tasks:", "These distinguishing characteristics present unique challenges and offer particular opportunities, so much so that research in LI has generally proceeded independently of text categorization research. In this survey, we will examine the common themes and ideas that underpin research in LI. We begin with a brief history of research that has led to modern LI (history), and then proceed to review the literature, first introducing the mathematical notation used in the article (notation), and then providing synthesis and analysis of existing research, focusing specifically on the representation of text (features) and the learning algorithms used (methods). We examine the methods for evaluating the quality of the systems (evaluation) as well as the areas where LI has been applied (applications), and then provide an overview of “off-the-shelf” LI systems (ots).
We conclude the survey with a discussion of the open issues in LI (openissues), enumerating issues and existing efforts to address them, as well as charting the main directions where further research in LI is required." ], [ "Although there are some dedicated survey articles, these tend to be relatively short; there have not been any comprehensive surveys of research in automated LI of text to date. The largest survey so far can be found in the literature review of Marco Lui's PhD thesis BIBREF11 , which served as an early draft and starting point for the current article. BIBREF12 provides a historical overview of language identification focusing on the use of language models. BIBREF13 gives a brief overview of some of the methods used for LI, and BIBREF14 provide a review of some of the techniques and applications used previously. BIBREF15 gives a short overview of some of the challenges, algorithms and available tools for LI. BIBREF16 provides a brief summary of LI, how it relates to other research areas, and some outstanding challenges, but only does so in general terms and does not go into any detail about existing work in the area. Another brief article about LI is BIBREF17 , which covers both LI of spoken language as well as LI of written documents, and also discusses LI of documents stored as images rather than digitally-encoded text." ], [ "LI as a task predates computational methods – the earliest interest in the area was motivated by the needs of translators, and simple manual methods were developed to quickly identify documents in specific languages. The earliest known work to describe a functional program for text LI is by BIBREF18 , a statistician, who used multiple discriminant analysis to teach a computer how to distinguish, at the word level, between English, Swedish and Finnish. Mustonen compiled a list of linguistically-motivated character-based features, and trained his language identifier on 300 words for each of the three target languages. The training procedure created two discriminant functions, which were tested with 100 words for each language. The experiment resulted in 76% of the words being correctly classified; even by current standards this percentage would be seen as acceptable given the small amount of training material, although the composition of training and test data is not clear, making the experiment unreproducible.", "In the early 1970s, BIBREF19 considered the problem of automatic LI. According to BIBREF20 and the available abstract of Nakamura's article, his language identifier was able to distinguish between 25 languages written with the Latin alphabet. As features, the method used the occurrence rates of characters and words in each language. From the abstract it seems that, in addition to the frequencies, he used some binary presence/absence features of particular characters or words, based on manual LI.", " BIBREF20 wrote his master's thesis “Language Identification by Statistical Analysis” for the Naval Postgraduate School at Monterey, California. The continued interest and the need to use LI of text in military intelligence settings is evidenced by the recent articles of, for example, BIBREF21 , BIBREF22 , BIBREF23 , and BIBREF24 . As features for LI, BIBREF20 used, e.g., the relative frequencies of characters and character bigrams.
With a majority vote classifier ensemble of seven classifiers using Kolmogorov-Smirnov's Test of Goodness of Fit and Yule's characteristic ( INLINEFORM0 ), he managed to achieve 89% accuracy over 53 characters when distinguishing between English and Spanish. His thesis actually includes the identifier program code (for the IBM System/360 Model 67 mainframe), and even the language models in printed form.", "Much of the earliest work on automatic LI was focused on identification of spoken language, or did not make a distinction between written and spoken language. For example, the work of BIBREF25 is primarily focused on LI of spoken utterances, but makes a broader contribution in demonstrating the feasibility of LI on the basis of a statistical model of broad phonetic information. However, their experiments do not use actual speech data, but rather “synthetic” data in the form of phonetic transcriptions derived from written text.", "Another subfield of speech technology, speech synthesis, has also generated a considerable amount of research in the LI of text, starting from the 1980s. In speech synthesis, the need to know the source language of individual words is crucial in determining how they should be pronounced. BIBREF26 uses the relative frequencies of character trigrams as probabilities and determines the language of words using a Bayesian model. Church explains the method – that has since been widely used in LI – as a small part of an article concentrating on many aspects of letter stress assignment in speech synthesis, which is probably why BIBREF27 is usually credited with introducing the aforementioned method to LI of text. As Beesley's article concentrated solely on the problem of LI, this single focus probably enabled his research to have greater visibility. The role of the program implementing his method was to route documents to MT systems, and Beesley's paper more clearly describes what has later come to be known as a character n-gram model. The fact that the distribution of characters is relatively consistent for a given language was already well known.", "The highest-cited early work on automatic LI is BIBREF7 . Cavnar and Trenkle's method (which we describe in detail in outofplace) builds up per-document and per-language profiles, and classifies a document according to which language profile it is most similar to, using a rank-order similarity metric. They evaluate their system on 3478 documents in eight languages obtained from USENET newsgroups, reporting a best overall accuracy of 99.8%. Gertjan van Noord produced an implementation of the method of Cavnar and Trenkle named TextCat, which has become eponymous with the method itself. TextCat is packaged with pre-trained models for a number of languages, and so it is likely that the strong results reported by Cavnar and Trenkle, combined with the ready availability of an “off-the-shelf” implementation, have resulted in the exceptional popularity of this particular method. BIBREF7 can be considered a milestone in automatic LI, as it popularized the use of automatic methods on character n-gram models for LI, and to date the method is still considered a benchmark for automatic LI." ], [ "This section introduces the notation used throughout this article to describe LI methods. We have translated the notation in the original papers to our notation, to make it easier to see the similarities and differences between the methods presented in the literature.
The formulas presented could be used to implement language identifiers and re-evaluate the studies they were originally presented in.", "A corpus INLINEFORM0 consists of individual tokens INLINEFORM1 which may be bytes, characters or words. INLINEFORM2 is comprised of a finite sequence of individual tokens, INLINEFORM3 . The total count of individual tokens INLINEFORM4 in INLINEFORM5 is denoted by INLINEFORM6 . In a corpus INLINEFORM7 with non-overlapping segments INLINEFORM8 , each segment is referred to as INLINEFORM9 , which may be a short document or a word or some other way of segmenting the corpus. The number of segments is denoted as INLINEFORM10 .", "A feature INLINEFORM0 is some countable characteristic of the corpus INLINEFORM1 . When referring to the set of all features INLINEFORM2 in a corpus INLINEFORM3 , we use INLINEFORM4 , and the number of features is denoted by INLINEFORM5 . A set of unique features in a corpus INLINEFORM6 is denoted by INLINEFORM7 . The number of unique features is referred to as INLINEFORM8 . The count of a feature INLINEFORM9 in the corpus INLINEFORM10 is referred to as INLINEFORM11 . If a corpus is divided into segments INLINEFORM12 , the count of a feature INLINEFORM13 in INLINEFORM14 is defined as the sum of counts over the segments of the corpus, i.e. INLINEFORM15 . Note that the segmentation may affect the count of a feature in INLINEFORM16 as features do not cross segment borders.", "A frequently-used feature is an n-gram, which consists of a contiguous sequence of INLINEFORM0 individual tokens. An n-gram starting at position INLINEFORM1 in a corpus segment is denoted INLINEFORM2 , where positions INLINEFORM3 remain within the same segment of the corpus as INLINEFORM4 . If INLINEFORM5 , INLINEFORM6 is an individual token. When referring to all n-grams of length INLINEFORM7 in a corpus INLINEFORM8 , we use INLINEFORM9 and the count of all such n-grams is denoted by INLINEFORM10 . The count of an n-gram INLINEFORM11 in a corpus segment INLINEFORM12 is referred to as INLINEFORM13 and is defined by count: DISPLAYFORM0", "The set of languages is INLINEFORM0 , and INLINEFORM1 denotes the number of languages. A corpus INLINEFORM2 in language INLINEFORM3 is denoted by INLINEFORM4 . A language model INLINEFORM5 based on INLINEFORM6 is denoted by INLINEFORM7 . The features given values by the model INLINEFORM8 are the domain INLINEFORM9 of the model. In a language model, a value INLINEFORM10 for the feature INLINEFORM11 is denoted by INLINEFORM12 . For each potential language INLINEFORM13 of a corpus INLINEFORM14 in an unknown language, a resulting score INLINEFORM15 is calculated. A corpus in an unknown language is also referred to as a test document." ], [ "The design of a supervised language identifier can generally be deconstructed into four key steps:", "A representation of text is selected", "A model for each language is derived from a training corpus of labelled documents", "A function is defined that determines the similarity between a document and each language", "The language of a document is predicted based on the highest-scoring model" ], [ "The theoretical description of some of the methods leaves room for interpretation on how to implement them. BIBREF28 define an algorithm to be any well-defined computational procedure. BIBREF29 introduces a three-tiered classification where programs implement algorithms and algorithms implement functions.
The examples of functions given by BIBREF29 , sort and find max, differ from our identify language as they are always solvable and produce the same results. In this survey, we have considered two methods to be the same if they always produce exactly the same results from exactly the same inputs. This would not be in line with the definition of an algorithm by BIBREF29 , as in his example there are two different algorithms mergesort and quicksort that implement the function sort, always producing identical results with the same input. What we in this survey call a method is actually a function in the tiers presented by BIBREF29 ." ], [ "In this section, we present an extensive list of features used in LI, some of which are not self-evident. The equations written in the unified notation defined earlier show how the values INLINEFORM0 used in the language models are calculated from the tokens INLINEFORM1 . For each feature type, we generally introduce the first published article that used that feature type, as well as more recent articles where the feature type has been considered." ], [ "In LI, text is typically modeled as a stream of characters. However, there is a slight mismatch between this view and how text is actually stored: documents are digitized using a particular encoding, which is a mapping from characters (e.g. a character in an alphabet), onto the actual sequence of bytes that is stored and transmitted by computers. Encodings vary in how many bytes they use to represent each character. Some encodings use a fixed number of bytes for each character (e.g. ASCII), whereas others use a variable-length encoding (e.g. UTF-8). Some encodings are specific to a given language (e.g. GuoBiao 18030 or Big5 for Chinese), whereas others are specifically designed to represent as many languages as possible (e.g. the Unicode family of encodings). Languages can often be represented in a number of different encodings (e.g. UTF-8 and Shift-JIS for Japanese), and sometimes encodings are specifically designed to share certain codepoints (e.g. all single-byte UTF-8 codepoints are exactly the same as ASCII). Most troubling for LI, isomorphic encodings can be used to encode different languages, meaning that the determination of the encoding often doesn't help in honing in on the language. Infamous examples of this are the ISO-8859 and EUC encoding families. Encodings pose unique challenges for practical applications: a given language can often be encoded in different forms, and a given encoding can often map onto multiple languages.", "Some research has included an explicit encoding detection step to resolve bytes to the characters they represent BIBREF30 , effectively transcoding the document into a standardized encoding before attempting to identify the language. However, transcoding is computationally expensive, and other research suggests that it may be possible to ignore encoding and build a single per-language model covering multiple encodings simultaneously BIBREF31 , BIBREF32 . Another solution is to treat each language-encoding pair as a separate category BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . The disadvantage of this is that it increases the computational cost by modeling a larger number of classes. Most of the research has avoided issues of encoding entirely by assuming that all documents use the same encoding BIBREF37 . This may be a reasonable assumption in some settings, such as when processing data from a single source (e.g. all data from Twitter and Wikipedia is UTF-8 encoded).
In practice, a disadvantage of this approach may be that some encodings are only applicable to certain languages (e.g. S-JIS for Japanese and Big5 for Chinese), so knowing that a document is in a particular encoding can provide information that would be lost if the document is transcoded to a universal encoding such as UTF-8. BIBREF38 used a parallel state machine to detect which encoding scheme a file could potentially have been encoded with. The knowledge of the encoding, if detected, is then used to narrow down the possible languages.", "Most features and methods do not make a distinction between bytes or characters, and because of this we will present feature and method descriptions in terms of characters, even if byte tokenization was actually used in the original research." ], [ "In this section, we review how individual character tokens have been used as features in .", " BIBREF39 used the formatting of numbers when distinguishing between Malay and Indonesian. BIBREF40 used the presence of non-alphabetic characters between the current word and the words before and after as features. BIBREF41 used emoticons (or emojis) in Arabic dialect identification with Naive Bayes (“NB”; see product). Non-alphabetic characters have also been used by BIBREF42 , BIBREF43 , BIBREF44 , and BIBREF45 .", " BIBREF46 used knowledge of alphabets to exclude languages where a language-unique character in a test document did not appear. BIBREF47 used alphabets collected from dictionaries to check if a word might belong to a language. BIBREF48 used the Unicode database to get the possible languages of individual Unicode characters. Lately, the knowledge of relevant alphabets has been used for also by BIBREF49 and BIBREF44 .", "Capitalization is mostly preserved when calculating character frequencies, but in contexts where it is possible to identify the orthography of a given document and where capitalization exists in the orthography, lowercasing can be used to reduce sparseness. In recent work, capitalization was used as a special feature by BIBREF42 , BIBREF43 , and BIBREF45 .", " BIBREF50 was the first to use the length of words in . BIBREF51 used the length of full person names comprising several words. Lately, the number of characters in words has been used for by BIBREF52 , BIBREF53 , BIBREF44 , and BIBREF45 . BIBREF52 also used the length of the two preceding words.", " BIBREF54 used character frequencies as feature vectors. In a feature vector, each feature INLINEFORM0 has its own integer value. The raw frequency – also called term frequency (TF) – is calculated for each language INLINEFORM1 as: DISPLAYFORM0 ", " BIBREF20 was the first to use the probability of characters. He calculated the probabilities as relative frequencies, by dividing the frequency of a feature found in the corpus by the total count of features of the same type in the corpus. When the relative frequency of a feature INLINEFORM0 is used as a value, it is calculated for each language INLINEFORM1 as: DISPLAYFORM0 ", " BIBREF55 calculated the relative frequencies of one character prefixes, and BIBREF56 did the same for one character suffixes.", " BIBREF57 calculated character frequency document frequency (“LFDF”) values. BIBREF58 compared their own Inverse Class Frequency (“ICF”) method with the Arithmetic Average Centroid (“AAC”) and the Class Feature Centroid (“CFC”) feature vector updating methods. In ICF a character appearing frequently only in some language gets more positive weight for that language. 
The values differ from Inverse Document Frequency (“IDF”, artemenko1), as they are calculated using also the frequencies of characters in other languages. Their ICF-based vectors generally performed better than those based on AAC or CFC. BIBREF59 explored using the relative frequencies of characters with similar discriminating weights. BIBREF58 also used Mutual Information (“MI”) and chi-square weighting schemes with characters.", " BIBREF32 compared the identification results of single characters with the use of character bigrams and trigrams when classifying over 67 languages. Both bigrams and trigrams generally performed better than unigrams. BIBREF60 also found that the identification results from identifiers using just characters are generally worse than those using character sequences." ], [ "In this section we consider the different combinations of characters used in the literature. Character mostly consist of all possible characters in a given encoding, but can also consist of only alphabetic or ideographic characters.", " BIBREF56 calculated the co-occurrence ratios of any two characters, as well as the ratio of consonant clusters of different sizes to the total number of consonants. BIBREF61 used the combination of every bigram and their counts in words. BIBREF53 used the proportions of question and exclamation marks to the total number of the end of sentence punctuation as features with several machine learning algorithms.", " BIBREF62 used FastText to generate character n-gram embeddings BIBREF63 . Neural network generated embeddings are explained in cooccurrencesofwords.", " BIBREF20 used the relative frequencies of vowels following vowels, consonants following vowels, vowels following consonants and consonants following consonants. BIBREF52 used vowel-consonant ratios as one of the features with Support Vector Machines (“SVMs”, supportvectormachines), Decision Trees (“DTs”, decisiontrees), and Conditional Random Fields (“CRFs”, openissues:short).", " BIBREF41 used the existence of word lengthening effects and repeated punctuation as features. BIBREF64 used the presence of characters repeating more than twice in a row as a feature with simple scoring (simple1). BIBREF65 used more complicated repetitions identified by regular expressions. BIBREF66 used letter and character bigram repetition with a CRF. BIBREF67 used the count of character sequences with three or more identical characters, using several machine learning algorithms.", "Character are continuous sequences of characters of length INLINEFORM0 . They can be either consecutive or overlapping. Consecutive character bigrams created from the four character sequence door are do and or, whereas the overlapping bigrams are do, oo, and or. Overlapping are most often used in the literature. Overlapping produces a greater number and variety of from the same amount of text.", " BIBREF20 was the first to use combinations of any two characters. He calculated the relative frequency of each bigram. RFTable2 lists more recent articles where relative frequencies of of characters have been used. BIBREF20 also used the relative frequencies of two character combinations which had one unknown character between them, also known as gapped bigrams. BIBREF68 used a modified relative frequency of character unigrams and bigrams.", "Character trigram frequencies relative to the word count were used by BIBREF92 , who calculated the values INLINEFORM0 as in vega1. 
Let INLINEFORM1 be the word-tokenized segmentation of the corpus INLINEFORM2 of character tokens, then: DISPLAYFORM0 ", "where INLINEFORM0 is the count of character trigrams INLINEFORM1 in INLINEFORM2 , and INLINEFORM3 is the total word count in the corpus. Later frequencies relative to the word count were used by BIBREF93 for character bigrams and trigrams.", " BIBREF25 divided characters into five phonetic groups and used a Markovian method to calculate the probability of each bigram consisting of these phonetic groups. In Markovian methods, the probability of a given character INLINEFORM0 is calculated relative to a fixed-size character context INLINEFORM1 in corpus INLINEFORM2 , as follows: DISPLAYFORM0 ", "where INLINEFORM0 is an prefix of INLINEFORM1 of length INLINEFORM2 . In this case, the probability INLINEFORM3 is the value INLINEFORM4 , where INLINEFORM5 , in the model INLINEFORM6 . BIBREF94 used 4-grams with recognition weights which were derived from Markovian probabilities. MarkovianTable lists some of the more recent articles where Markovian character have been used.", " BIBREF110 was the first author to propose a full-fledged probabilistic language identifier. He defines the probability of a trigram INLINEFORM0 being written in the language INLINEFORM1 to be: DISPLAYFORM0 ", "He considers the prior probabilities of each language INLINEFORM0 to be equal, which leads to: DISPLAYFORM0 ", " BIBREF110 used the probabilities INLINEFORM0 as the values INLINEFORM1 in the language models.", " BIBREF111 used a list of the most frequent bigrams and trigrams with logarithmic weighting. BIBREF112 was the first to use direct frequencies of character as feature vectors. BIBREF113 used Principal Component Analysis (“PCA”) to select only the most discriminating bigrams in the feature vectors representing languages. BIBREF114 used the most frequent and discriminating byte unigrams, bigrams, and trigrams among their feature functions. They define the most discriminating features as those which have the most differing relative frequencies between the models of the different languages. BIBREF115 tested from two to five using frequencies as feature vectors, frequency ordered lists, relative frequencies, and Markovian probabilities. FrequencyVectorTable lists the more recent articles where the frequency of character have been used as features. In the method column, “RF” refers to Random Forest (cf. decisiontrees), “LR” to Logistic Regression (discriminantfunctions), “KRR” to Kernel Ridge Regression (vectors), “KDA” to Kernel Discriminant Analysis (vectors), and “NN” to Neural Networks (neuralnetworks).", " BIBREF47 used the last two and three characters of open class words. BIBREF34 used an unordered list of distinct trigrams with the simple scoring method (Simplescoring). BIBREF132 used Fisher's discriminant function to choose the 1000 most discriminating trigrams. BIBREF133 used unique 4-grams of characters with positive Decision Rules (Decisionrule). BIBREF134 used the frequencies of bi- and trigrams in words unique to a language. BIBREF135 used lists of the most frequent trigrams.", " BIBREF38 divided possible character bigrams into those that are commonly used in a language and to those that are not. They used the ratio of the commonly used bigrams to all observed bigrams to give a confidence score for each language. 
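The count-based values and Markovian probabilities referred to in the passage above can be illustrated with a generic character n-gram identifier. The following sketch is not any specific cited system; the add-one smoothing and the alphabet-size constant are assumptions made only to keep the example self-contained.

```python
# Generic illustration (not a specific cited system): per-language character
# n-gram models scored with Markovian probabilities P(t_i | preceding n-1 tokens).
# Add-one smoothing and alphabet size 256 are our assumptions, not the survey's.
import math
from collections import Counter
from typing import Dict

def ngram_counts(text: str, n: int) -> Counter:
    # Overlapping character n-grams within a single segment.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def train(corpus: str, n: int) -> Dict[str, Counter]:
    return {"ngrams": ngram_counts(corpus, n), "contexts": ngram_counts(corpus, n - 1)}

def log_score(doc: str, model: Dict[str, Counter], n: int, alphabet: int = 256) -> float:
    score = 0.0
    for i in range(len(doc) - n + 1):
        gram, ctx = doc[i:i + n], doc[i:i + n - 1]
        score += math.log((model["ngrams"][gram] + 1) / (model["contexts"][ctx] + alphabet))
    return score

def identify(doc: str, models: Dict[str, Dict[str, Counter]], n: int = 3) -> str:
    # The highest-scoring language model wins.
    return max(models, key=lambda lang: log_score(doc, models[lang], n))
```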
BIBREF136 used the difference between the ISO Latin-1 code values of two consecutive characters as well as two characters separated by another character, also known as gapped character bigrams.", " BIBREF137 used the IDF and the transition probability of trigrams. They calculated the IDF values INLINEFORM0 of trigrams INLINEFORM1 for each language INLINEFORM2 , as in artemenko1, where INLINEFORM3 is the number of trigrams INLINEFORM4 in the corpus of the language INLINEFORM5 and INLINEFORM6 is the number of languages in which the trigram INLINEFORM7 is found, where INLINEFORM8 is the language-segmented training corpus with each language as a single segment. DISPLAYFORM0 ", " INLINEFORM0 is defined as: DISPLAYFORM0 ", " BIBREF138 used from one to four, which were weighted with “TF-IDF” (Term Frequency–Inverse Document Frequency). TF-IDF was calculated as: DISPLAYFORM0 ", "TF-IDF weighting or close variants have been widely used for . BIBREF139 used “CF-IOF” (Class Frequency-Inverse Overall Frequency) weighted 3- and 4-grams.", " BIBREF140 used the logarithm of the ratio of the counts of character bigrams and trigrams in the English and Hindi dictionaries. BIBREF141 used a feature weighting scheme based on mutual information (“MI”). They also tried weighting schemes based on the “GSS” (Galavotti, Sebastiani, and Simi) and “NGL” (Ng, Goh, and Low) coefficients, but using the MI-based weighting scheme proved the best in their evaluations when they used the sum of values method (sumvalues1). BIBREF67 used punctuation trigrams, where the first character has to be a punctuation mark (but not the other two characters). BIBREF142 used consonant bi- and trigrams which were generated from words after the vowels had been removed.", "The language models mentioned earlier consisted only of of the same size INLINEFORM0 . If from one to four were used, then there were four separate language models. BIBREF7 created ordered lists of the most frequent for each language. BIBREF143 used similar lists with symmetric cross-entropy. BIBREF144 used a Markovian method to calculate the probability of byte trigrams interpolated with byte unigrams. BIBREF145 created a language identifier based on character of different sizes over 281 languages, and obtained an identification accuracy of 62.8% for extremely short samples (5–9 characters). Their language identifier was used or evaluated by BIBREF146 , BIBREF147 , and BIBREF148 . BIBREF146 managed to improve the identification results by feeding the raw language distance calculations into an SVM.", "DifferingNgramTable3 lists recent articles where character of differing sizes have been used. “LR” in the methods column refer to Logistic Regression (maxent), “LSTM RNN” to Long Short-Term Memory Recurrent Neural Networks (neuralnetworks), and “DAN” to Deep Averaging Networks (neuralnetworks). BIBREF30 used up to the four last characters of words and calculated their relative frequencies. BIBREF149 used frequencies of 2–7-grams, normalized relative to the total number of in all the language models as well as the current language model. BIBREF60 compared the use of different sizes of in differing combinations, and found that combining of differing sizes resulted in better identification scores. 
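Since the exact IDF and TF-IDF formulas are elided above (the DISPLAYFORM placeholders), the following shows one standard instantiation of TF-IDF weighting over character n-grams, treating each language's training corpus as a single "document"; the cited works may well use different variants.

```python
# One standard TF-IDF instantiation over character n-grams (an assumption;
# the exact weighting schemes used in the works cited above may differ).
import math
from collections import Counter
from typing import Dict

def char_ngrams(text: str, n: int) -> Counter:
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def tf_idf_weights(lang_corpora: Dict[str, str], n: int = 3) -> Dict[str, Dict[str, float]]:
    counts = {lang: char_ngrams(text, n) for lang, text in lang_corpora.items()}
    # "Document frequency" here is the number of languages an n-gram occurs in.
    df = Counter()
    for c in counts.values():
        df.update(set(c))
    num_langs = len(lang_corpora)
    weights = {}
    for lang, c in counts.items():
        total = sum(c.values())
        weights[lang] = {g: (freq / total) * math.log(num_langs / df[g])
                         for g, freq in c.items()}
    return weights
```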
BIBREF150 , BIBREF151 , BIBREF152 used mixed length domain-independent language models of byte from one to three or four.", "Mixed length language models were also generated by BIBREF36 and later by BIBREF153 , BIBREF101 , who used the most frequent and discriminating longer than two bytes, up to a maximum of 12 bytes, based on their weighted relative frequencies. INLINEFORM0 of the most frequent were extracted from training corpora for each language, and their relative frequencies were calculated. In the tests reported in BIBREF153 , INLINEFORM1 varied from 200 to 3,500 . Later BIBREF154 also evaluated different combinations of character as well as their combinations with words.", " BIBREF155 used mixed-order frequencies relative to the total number of in the language model. BIBREF61 used frequencies of from one to five and gapped 3- and 4-grams as features with an SVM. As an example, some gapped 4-grams from the word Sterneberg would be Senb, tree, enbr, and reeg. BIBREF156 used character as a backoff from Markovian word . BIBREF157 used the frequencies of word initial ranging from 3 to the length of the word minus 1. BIBREF158 used the most relevant selected using the absolute value of the Pearson correlation. BIBREF159 used only the first 10 characters from a longer word to generate the , while the rest were ignored. BIBREF160 used only those which had the highest TF-IDF scores. BIBREF43 used character weighted by means of the “BM25” (Best Match 25) weighting scheme. BIBREF161 used byte up to length 25.", " BIBREF61 used consonant sequences generated from words. BIBREF189 used the presence of vowel sequences as a feature with a NB classifier (see naivebayes) when distinguishing between English and transliterated Indian languages.", " BIBREF190 used a basic dictionary (basicdictionary) composed of the 400 most common character 4-grams.", " BIBREF46 and BIBREF110 used character combinations (of different sizes) that either existed in only one language or did not exist in one or more languages." ], [ " BIBREF191 used the suffixes of lexical words derived from untagged corpora. BIBREF192 used prefixes and suffixes determined using linguistic knowledge of the Arabic language. BIBREF193 used suffixes and prefixes in rule-based . BIBREF134 used morphemes and morpheme trigrams (morphotactics) constructed by Creutz's algorithm BIBREF194 . BIBREF195 used prefixes and suffixes constructed by his own algorithm, which was later also used by BIBREF196 . BIBREF197 used morpheme lexicons in . BIBREF196 compared the use of morphological features with the use of variable sized character . When choosing between ten European languages, the morphological features obtained only 26.0% accuracy while the reached 82.7%. BIBREF198 lemmatized Malay words in order to get the base forms. BIBREF199 used a morphological analyzer of Arabic. BIBREF70 used morphological information from a part-of-speech (POS) tagger. BIBREF189 and BIBREF64 used manually selected suffixes as features. BIBREF200 created morphological grammars to distinguish between Croatian and Serbian. BIBREF201 used morphemes created by Morfessor, but they also used manually created morphological rules. BIBREF102 used a suffix module containing the most frequent suffixes. BIBREF202 and BIBREF159 used word suffixes as features with CRFs. BIBREF119 used an unsupervised method to learn morphological features from training data. The method collects candidate affixes from a dictionary built using the training data. 
If the remaining part of a word is found from the dictionary after removing a candidate affix, the candidate affix is considered to be a morpheme. BIBREF119 used 5% of the most frequent affixes in language identification. BIBREF183 used character classified into different types, which included prefixes and suffixes. PrefixSuffixTable lists some of the more recent articles where prefixes and suffixes collected from a training corpus has been used for .", " BIBREF206 used trigrams composed of syllables. BIBREF198 used Markovian syllable bigrams for between Malay and English. Later BIBREF207 also experimented with syllable uni- and trigrams. BIBREF114 used the most frequent as well as the most discriminating Indian script syllables, called aksharas. They used single aksharas, akshara bigrams, and akshara trigrams. Syllables would seem to be especially apt in situations where distinction needs to be made between two closely-related languages.", " BIBREF96 used the trigrams of non-syllable chunks that were based on MI. BIBREF198 experimented also with Markovian bigrams using both character and grapheme bigrams, but the syllable bigrams proved to work better. Graphemes in this case are the minimal units of the writing system, where a single character may be composed of several graphemes (e.g. in the case of the Hangul or Thai writing systems). Later, BIBREF207 also used grapheme uni- and trigrams. BIBREF207 achieved their best results combining word unigrams and syllable bigrams with a grapheme back-off. BIBREF208 used the MADAMIRA toolkit for D3 decliticization and then used D3-token 5-grams. D3 decliticization is a way to preprocess Arabic words presented by BIBREF209 .", "Graphones are sequences of characters linked to sequences of corresponding phonemes. They are automatically deduced from a bilingual corpus which consists of words and their correct pronunciations using Joint Sequence Models (“JSM”). BIBREF210 used language tags instead of phonemes when generating the graphones and then used Markovian graphone from 1 to 8 in ." ], [ " BIBREF211 used the position of the current word in word-level . The position of words in sentences has also been used as a feature in code-switching detection by BIBREF52 . It had predictive power greater than the language label or length of the previous word.", " BIBREF18 used the characteristics of words as parts of discriminating functions. BIBREF212 used the string edit distance and overlap between the word to be identified and words in dictionaries. Similarly BIBREF140 used a modified edit distance, which considers the common spelling substitutions when Hindi is written using latin characters. BIBREF213 used the Minimum Edit Distance (“MED”).", "Basic dictionaries are unordered lists of words belonging to a language. Basic dictionaries do not include information about word frequency, and are independent of the dictionaries of other languages. BIBREF110 used a dictionary for as a part of his speech synthesizer. Each word in a dictionary had only one possible “language”, or pronunciation category. More recently, a basic dictionary has been used for by BIBREF214 , BIBREF52 , and BIBREF90 .", "Unique word dictionaries include only those words of the language, that do not belong to the other languages targeted by the language identifier. BIBREF215 used unique short words (from one to three characters) to differentiate between languages. 
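A unique word dictionary of the kind just described can be derived directly from the training corpora, as in the following illustrative sketch. The toy corpora and function names are assumptions; practical systems usually add frequency thresholds, normalisation, and a fall-back method for words that are not unique to any language.

    def unique_word_dictionaries(corpora):
        """Words that occur in exactly one of the language corpora."""
        vocab = {lang: set(text.lower().split()) for lang, text in corpora.items()}
        unique = {}
        for lang, words in vocab.items():
            others = set().union(*(v for l, v in vocab.items() if l != lang))
            unique[lang] = words - others
        return unique

    def identify_word(word, unique):
        """Return the language whose unique-word list contains the word, if any."""
        for lang, words in unique.items():
            if word.lower() in words:
                return lang
        return None  # ambiguous or unseen word: defer to another method

    corpora = {"eng": "the cat sat on the mat", "fin": "kissa istui matolla"}
    unique = unique_word_dictionaries(corpora)
    print(identify_word("kissa", unique))  # -> "fin"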
Recently, a dictionary of unique words was used for by BIBREF116 , BIBREF216 , and BIBREF67 .", " BIBREF47 used exhaustive lists of function words collected from dictionaries. BIBREF217 used stop words – that is non-content or closed-class words – as a training corpus. Similarly, BIBREF218 used words from closed word classes, and BIBREF97 used lists of function words. BIBREF219 used a lexicon of Arabic words and phrases that convey modality. Common to these features is that they are determined based on linguistic knowledge.", " BIBREF220 used the most relevant words for each language. BIBREF221 used unique or nearly unique words. BIBREF80 used Information Gain Word-Patterns (“IG-WP”) to select the words with the highest information gain.", " BIBREF222 made an (unordered) list of the most common words for each language, as, more recently, did BIBREF223 , BIBREF83 , and BIBREF85 . BIBREF224 encoded the most common words to root forms with the Soundex algorithm.", " BIBREF225 collected the frequencies of words into feature vectors. BIBREF112 compared the use of character from 2 to 5 with the use of words. Using words resulted in better identification results than using character bigrams (test document sizes of 20, 50, 100 or 200 characters), but always worse than character 3-, 4- or 5-grams. However, the combined use of words and character 4-grams gave the best results of all tested combinations, obtaining 95.6% accuracy for 50 character sequences when choosing between 13 languages. BIBREF158 used TF-IDF scores of words to distinguish between language groups. Recently, the frequency of words has also been used for by BIBREF180 , BIBREF183 , BIBREF129 , and BIBREF142 .", " BIBREF226 and BIBREF227 were the first to use relative frequencies of words in . As did BIBREF112 for word frequencies, also BIBREF60 found that combining the use of character with the use of words provided the best results. His language identifier obtained 99.8% average recall for 50 character sequences for the 10 evaluated languages (choosing between the 13 languages known by the language identifier) when using character from 1 to 6 combined with words. BIBREF98 calculated the relative frequency of words over all the languages. BIBREF137 calculated the IDF of words, following the approach outlined in artemenko1. BIBREF177 calculated the Pointwise Mutual Information (“PMI”) for words and used it to group words to Chinese dialects or dialect groups. Recently, the relative frequency of words has also been used for by BIBREF184 , BIBREF148 and BIBREF91 ", " BIBREF228 used the relative frequency of words with less than six characters. Recently, BIBREF83 also used short words, as did BIBREF45 .", " BIBREF229 used the relative frequency calculated from Google searches. Google was later also used by BIBREF96 and BIBREF230 .", " BIBREF231 created probability maps for words for German dialect identification between six dialects. In a word probability map, each predetermined geographic point has a probability for each word form. Probabilities were derived using a linguistic atlas and automatically-induced dialect lexicons.", " BIBREF232 used commercial spelling checkers, which utilized lexicons and morphological analyzers. The language identifier of BIBREF232 obtained 97.9% accuracy when classifying one-line texts between 11 official South African languages. BIBREF233 used the ALMORGEANA analyzer to check if the word had an analysis in modern standard Arabic. 
They also used sound change rules to use possible phonological variants with the analyzer. BIBREF234 used spellchecking and morphological analyzers to detect English words from Hindi–English mixed search queries. BIBREF235 used spelling checkers to distinguish between 15 languages, extending the work of BIBREF232 with dynamic model selection in order to gain better performance. BIBREF157 used a similarity count to find if mystery words were misspelled versions of words in a dictionary.", " BIBREF236 used an “LBG-VQ” (Linde, Buzo & Gray algorithm for Vector Quantization) approach to design a codebook for each language BIBREF237 . The codebook contained a predetermined number of codevectors. Each codeword represented the word it was generated from as well as zero or more words close to it in the vector space." ], [ " BIBREF41 used the number of words in a sentence with NB. BIBREF53 and BIBREF45 used the sentence length calculated in both words and characters with several machine learning algorithms.", " BIBREF53 used the ratio to the total number of words of: once-occurring words, twice-occurring words, short words, long words, function words, adjectives and adverbs, personal pronouns, and question words. They also used the word-length distribution for words of 1–20 characters.", " BIBREF193 used at least the preceding and proceeding words with manual rules in word-level for text-to-speech synthesis. BIBREF238 used Markovian word with a Hidden Markov Model (“HMM”) tagger (othermethods). WordNgramTable lists more recent articles where word or similar constructs have been used. “PPM” in the methods column refers to Prediction by Partial Matching (smoothing), and “kNN” to INLINEFORM0 Nearest Neighbor classification (ensemble).", " BIBREF239 used word trigrams simultaneously with character 4-grams. He concluded that word-based models can be used to augment the results from character when they are not providing reliable identification results. WordCharacterNgramTable lists articles where both character and word have been used together. “CBOW” in the methods column refer to Continuous Bag of Words neural network (neuralnetworks), and “MIRA” to Margin Infused Relaxed Algorithm (supportvectormachines). BIBREF154 evaluated different combinations of word and character with SVMs. The best combination for language variety identification was using all the features simultaneously. BIBREF187 used normal and gapped word and character simultaneously.", " BIBREF240 uses word embeddings consisting of Positive Pointwise Mutual Information (“PPMI”) counts to represent each word type. Then they use Truncated Singular Value Decomposition (“TSVD”) to reduce the dimension of the word vectors to 100. BIBREF241 used INLINEFORM0 -means clustering when building dialectal Arabic corpora. BIBREF242 used features provided by Latent Semantic Analysis (“LSA”) with SVMs and NB.", " BIBREF243 present two models, the CBOW model and the continuous skip-gram model. The CBOW model can be used to generate a word given it's context and the skip-gram model can generate the context given a word. The projection matrix, which is the weight matrix between the input layer and the hidden layer, can be divided into vectors, one vector for each word in the vocabulary. These word-vectors are also referred to as word embeddings. The embeddings can be used as features in other tasks after the neural network has been trained. 
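For illustration, the following sketch trains skip-gram embeddings with the gensim library (assumed to be available; parameter names follow gensim 4.x) and averages them into a document-level feature vector that could be passed to any of the classifiers discussed later. The corpus, dimensionality, and training settings are toy assumptions, not values used in the cited work.

    import numpy as np
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat", "on", "the", "mat"],
                 ["kissa", "istui", "matolla"]]

    # sg=1 selects the skip-gram objective; sg=0 would give the CBOW model.
    model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

    def doc_vector(tokens, model):
        """Average the embeddings of known tokens into one feature vector."""
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    features = doc_vector(["the", "cat"], model)  # usable as input to an SVM, NB, etc.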
BIBREF244 , BIBREF245 , BIBREF80 , BIBREF246 , BIBREF247 , BIBREF248 , BIBREF62 , and BIBREF130 used word embeddings generated by the word2vec skip-gram model BIBREF243 as features in . BIBREF249 used word2vec word embeddings and INLINEFORM0 -means clustering. BIBREF250 , BIBREF251 , and BIBREF44 also used word embeddings created with word2vec.", " BIBREF167 trained both character and word embeddings using FastText text classification method BIBREF63 on the Discriminating between Similar Languages (“DSL”) 2016 shared task, where it reached low accuracy when compared with the other methods. BIBREF205 used FastText to train word vectors including subword information. Then he used these word vectors together with some additional word features to train a CRF-model which was used for codeswitching detection.", " BIBREF212 extracted features from the hidden layer of a Recurrent Neural Network (“RNN”) that had been trained to predict the next character in a string. They used the features with a SVM classifier.", " BIBREF229 evaluated methods for detecting foreign language inclusions and experimented with a Conditional Markov Model (“CMM”) tagger, which had performed well on Named Entity Recognition (“NER”). BIBREF229 was able to produce the best results by incorporating her own English inclusion classifier's decision as a feature for the tagger, and not using the taggers POS tags. BIBREF197 used syntactic parsers together with dictionaries and morpheme lexicons. BIBREF278 used composed of POS tags and function words. BIBREF173 used labels from a NER system, cluster prefixes, and Brown clusters BIBREF279 . BIBREF214 used POS tag from one to three and BIBREF43 from one to five, and BIBREF67 used POS tag trigrams with TF-IDF weighting. BIBREF203 , BIBREF42 , BIBREF53 , and BIBREF45 have also recently used POS tags. BIBREF80 used POS tags with emotion-labeled graphs in Spanish variety identification. In emotion-labeled graphs, each POS-tag was connected to one or more emotion nodes if a relationship between the original word and the emotion was found from the Spanish Emotion Lexicon. They also used POS-tags with IG-WP. BIBREF208 used the MADAMIRA tool for morphological analysis disambiguation. The polySVOX text analysis module described by BIBREF197 uses two-level rules and morpheme lexicons on sub-word level and separate definite clause grammars (DCGs) on word, sentence, and paragraph levels. The language of sub-word units, words, sentences, and paragraphs in multilingual documents is identified at the same time as performing syntactic analysis for the document. BIBREF280 converted sentences into POS-tag patterns using a word-POS dictionary for Malay. The POS-tag patterns were then used by a neural network to indicate whether the sentences were written in Malay or not. BIBREF281 used Jspell to detect differences in the grammar of Portuguese variants. BIBREF200 used a syntactic grammar to recognize verb-da-verb constructions, which are characteristic of the Serbian language. The syntactic grammar was used together with several morphological grammars to distinguish between Croatian and Serbian.", " BIBREF193 used the weighted scores of the words to the left and right of the word to be classified. BIBREF238 used language labels within an HMM. BIBREF282 used the language labels of other words in the same sentence to determine the language of the ambiguous word. 
The languages of the other words had been determined by the positive Decision Rules (Decisionrule), using dictionaries of unique words when possible. BIBREF213 , BIBREF71 used the language tags of the previous three words with an SVM. BIBREF283 used language labels of surrounding words with NB. BIBREF82 used the language probabilities of the previous word to determining weights for languages. BIBREF156 used unigram, bigram and trigram language label transition probabilities. BIBREF284 used the language labels for the two previous words as well as knowledge of whether code-switching had already been detected or not. BIBREF285 used the language label of the previous word to determine the language of an ambiguous word. BIBREF286 also used the language label of the previous word. BIBREF287 used the language identifications of 2–4 surrounding words for post-identification correction in word-level . BIBREF109 used language labels with a CRF. BIBREF52 used language labels of the current and two previous words in code-switching point prediction. Their predictive strength was lower than the count of code-switches, but better than the length or position of the word. All of the features were used together with NB, DT and SVM. BIBREF288 used language label bigrams with an HMM. BIBREF41 used the word-level language labels obtained with the approach of BIBREF289 on sentence-level dialect identification." ], [ "Feature smoothing is required in order to handle the cases where not all features INLINEFORM0 in a test document have been attested in the training corpora. Thus, it is used especially when the count of features is high, or when the amount of training data is low. Smoothing is usually handled as part of the method, and not pre-calculated into the language models. Most of the smoothing methods evaluated by BIBREF290 have been used in , and we follow the order of methods in that article.", "In Laplace smoothing, an extra number of occurrences is added to every possible feature in the language model. BIBREF291 used Laplace's sample size correction (add-one smoothing) with the product of Markovian probabilities. BIBREF292 experimented with additive smoothing of 0.5, and noted that it was almost as good as Good-Turing smoothing. BIBREF290 calculate the values for each as: DISPLAYFORM0 ", "where INLINEFORM0 is the probability estimate of INLINEFORM1 in the model and INLINEFORM2 its frequency in the training corpus. INLINEFORM3 is the total number of of length INLINEFORM4 and INLINEFORM5 the number of distinct in the training corpus. INLINEFORM6 is the Lidstone smoothing parameter. When using Laplace smoothing, INLINEFORM7 is equal to 1 and with Lidstone smoothing, the INLINEFORM8 is usually set to a value between 0 and 1.", "The penalty values used by BIBREF170 with the HeLI method function as a form of additive smoothing. BIBREF145 evaluated additive, Katz, absolute discounting, and Kneser-Ney smoothing methods. Additive smoothing produced the least accurate results of the four methods. BIBREF293 and BIBREF258 evaluated NB with several different Lidstone smoothing values. BIBREF107 used additive smoothing with character as a baseline classifier, which they were unable to beat with Convolutional Neural Networks (“CNNs”).", " BIBREF292 used Good-Turing smoothing with the product of Markovian probabilities. BIBREF290 define the Good-Turing smoothed count INLINEFORM0 as: DISPLAYFORM0 ", "where INLINEFORM0 is the number of features occurring exactly INLINEFORM1 times in the corpus INLINEFORM2 . 
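The additive (Lidstone) estimate and the Good-Turing adjusted counts described above can be sketched as follows. The choice of λ = 0.5 and the handling of unseen events are illustrative assumptions; production implementations additionally smooth the frequency-of-frequency counts, which this sketch does not attempt.

    from collections import Counter

    def lidstone_model(ngrams, lam=0.5):
        """Additive (Lidstone) smoothing: unseen n-grams still receive probability mass."""
        counts = Counter(ngrams)
        total = sum(counts.values())
        distinct = len(counts) + 1          # +1 leaves room for unseen n-grams
        def prob(gram):
            return (counts[gram] + lam) / (total + lam * distinct)
        return prob

    def good_turing_counts(ngrams):
        """Good-Turing adjusted count r* = (r + 1) * n_{r+1} / n_r for each frequency r.
        Frequencies whose n_{r+1} is zero come out as zero here; in practice the
        highest counts are usually left unadjusted."""
        counts = Counter(ngrams)
        freq_of_freq = Counter(counts.values())
        return {r: (r + 1) * freq_of_freq.get(r + 1, 0) / freq_of_freq[r]
                for r in freq_of_freq}

    ngrams = ["th", "he", "th", "ca", "at"]
    p = lidstone_model(ngrams)
    print(p("th"), p("zz"))          # a seen versus an unseen bigram
    print(good_turing_counts(ngrams))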
Lately Good-Turing smoothing has been used by BIBREF294 and BIBREF88 .", " BIBREF220 used Jelinek-Mercer smoothing correction over the relative frequencies of words, calculated as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a smoothing parameter, which is usually some small value like 0.1. BIBREF105 used character 1–8 grams with Jelinek-Mercer smoothing. Their language identifier using character 5-grams achieved 3rd place (out of 12) in the TweetLID shared task constrained track.", " BIBREF95 and BIBREF145 used the Katz back-off smoothing BIBREF295 from the SRILM toolkit, with perplexity. Katz smoothing is an extension of Good-Turing discounting. The probability mass left over from the discounted is then distributed over unseen via a smoothing factor. In the smoothing evaluations by BIBREF145 , Katz smoothing performed almost as well as absolute discounting, which produced the best results. BIBREF296 evaluated Witten-Bell, Katz, and absolute discounting smoothing methods. Witten-Bell got 87.7%, Katz 87.5%, and absolute discounting 87.4% accuracy with character 4-grams.", " BIBREF297 used the PPM-C algorithm for . PPM-C is basically a product of Markovian probabilities with an escape scheme. If an unseen context is encountered for the character being processed, the escape probability is used together with a lower-order model probability. In PPM-C, the escape probability is the sum of the seen contexts in the language model. PPM-C was lately used by BIBREF165 . The PPM-D+ algorithm was used by BIBREF298 . BIBREF299 and BIBREF300 used a PPM-A variant. BIBREF301 also used PPM. The language identifier of BIBREF301 obtained 91.4% accuracy when classifying 100 character texts between 277 languages. BIBREF302 used Witten-Bell smoothing with perplexity.", " BIBREF303 used a Chunk-Based Language Model (“CBLM”), which is similar to PPM models.", " BIBREF145 used several smoothing techniques with Markovian probabilities. Absolute discounting from the VariKN toolkit performed the best. BIBREF145 define the smoothing as follows: a constant INLINEFORM0 is subtracted from the counts INLINEFORM1 of all observed INLINEFORM2 and the held-out probability mass is distributed between the unseen in relation to the probabilities of lower order INLINEFORM3 , as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a scaling factor that makes the conditional distribution sum to one. Absolute discounting with Markovian probabilities from the VariKN toolkit was later also used by BIBREF146 , BIBREF147 , and BIBREF148 .", "The original Kneser-Ney smoothing is based on absolute discounting with an added back-off function to lower-order models BIBREF145 . BIBREF290 introduced a modified version of the Kneser-Ney smoothing using interpolation instead of back-off. BIBREF304 used the Markovian probabilities with Witten-Bell and modified Kneser-Ney smoothing. BIBREF88 , BIBREF166 , and BIBREF261 also recently used modified Kneser-Ney discounting. BIBREF119 used both original and modified Kneser-Ney smoothings. In the evaluations of BIBREF145 , Kneser-Ney smoothing fared better than additive, but somewhat worse than the Katz and absolute discounting smoothing. Lately BIBREF109 also used Kneser-Ney smoothing.", " BIBREF86 , BIBREF87 evaluated several smoothing techniques with character and word : Laplace/Lidstone, Witten-Bell, Good-Turing, and Kneser-Ney. In their evaluations, additive smoothing with 0.1 provided the best results. 
Good-Turing was not as good as additive smoothing, but better than Witten-Bell and Kneser-Ney smoothing. Witten-Bell proved to be clearly better than Kneser-Ney." ], [ "In recent years there has been a tendency towards attempting to combine several different types of features into one classifier or classifier ensemble. Many recent studies use readily available classifier implementations and simply report how well they worked with the feature set used in the context of their study. There are many methods presented in this article that are still not available as out of the box implementations, however. There are many studies which have not been re-evaluated at all, going as far back as BIBREF18 . Our hope is that this article will inspire new studies and many previously unseen ways of combining features and methods. In the following sections, the reviewed articles are grouped by the methods used for ." ], [ " BIBREF46 used a positive Decision Rules with unique characters and character , that is, if a unique character or character was found, the language was identified. The positive Decision Rule (unique features) for the test document INLINEFORM0 and the training corpus INLINEFORM1 can be formulated as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the set of unique features in INLINEFORM1 , INLINEFORM2 is the corpus for language INLINEFORM3 , and INLINEFORM4 is a corpus of any other language INLINEFORM5 . Positive decision rules can also be used with non-unique features when the decisions are made in a certain order. For example, BIBREF52 presents the pseudo code for her dictionary lookup tool, where these kind of decisions are part of an if-then-else statement block. Her (manual) rule-based dictionary lookup tool works better for Dutch–English code-switching detection than the SVM, DT, or CRF methods she experiments with. The positive Decision Rule has also been used recently by BIBREF85 , BIBREF190 , BIBREF287 , BIBREF216 , BIBREF305 , BIBREF169 , and BIBREF214 .", "In the negative Decision Rule, if a character or character combination that was found in INLINEFORM0 does not exist in a particular language, that language is omitted from further identification. The negative Decision Rule can be expressed as: DISPLAYFORM0 ", "where INLINEFORM0 is the corpus for language INLINEFORM1 . The negative Decision Rule was first used by BIBREF47 in .", " BIBREF118 evaluated the JRIP classifier from the Waikato Environment for Knowledge Analysis (“WEKA”). JRIP is an implementation of the propositional rule learner. It was found to be inferior to the SVM, NB and DT algorithms.", "In isolation the desicion rules tend not to scale well to larger numbers of languages (or very short test documents), and are thus mostly used in combination with other methods or as a Decision Tree." ], [ " BIBREF306 were the earliest users of Decision Trees (“DT”) in . They used DT based on characters and their context without any frequency information. In training the DT, each node is split into child nodes according to an information theoretic optimization criterion. For each node a feature is chosen, which maximizes the information gain at that node. The information gain is calculated for each feature and the feature with the highest gain is selected for the node. In the identification phase, the nodes are traversed until only one language is left (leaf node). Later, BIBREF196 , BIBREF307 , and BIBREF308 have been especially successful in using DTs.", "Random Forest (RF) is an ensemble classifier generating many DTs. 
It has been succesfully used in by BIBREF140 , BIBREF201 , BIBREF309 , and BIBREF185 , BIBREF172 ." ], [ "In simple scoring, each feature in the test document is checked against the language model for each language, and languages which contain that feature are given a point, as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the INLINEFORM1 th feature found in the test document INLINEFORM2 . The language scoring the most points is the winner. Simple scoring is still a good alternative when facing an easy problem such as preliminary language group identification. It was recently used for this purpose by BIBREF246 with a basic dictionary. They achieved 99.8% accuracy when identifying between 6 language groups. BIBREF310 use a version of simple scoring as a distance measure, assigning a penalty value to features not found in a model. In this version, the language scoring the least amount of points is the winner. Their language identifier obtained 100% success rate with character 4-grams when classifying relatively large documents (from 1 to 3 kilobytes), between 10 languages. Simple scoring was also used lately by BIBREF166 , BIBREF311 , and BIBREF90 ." ], [ "The sum of values can be expressed as: DISPLAYFORM0 ", "where INLINEFORM0 is the INLINEFORM1 th feature found in the test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of the language INLINEFORM4 . The language with the highest score is the winner.", "The simplest case of sumvalues1 is when the text to be identified contains only one feature. An example of this is BIBREF157 who used the frequencies of short words as values in word-level identification. For longer words, he summed up the frequencies of different-sized found in the word to be identified. BIBREF210 first calculated the language corresponding to each graphone. They then summed up the predicted languages, and the language scoring the highest was the winner. When a tie occurred, they used the product of the Markovian graphone . Their method managed to outperform SVMs in their tests.", " BIBREF46 used the average of all the relative frequencies of the in the text to be identified. BIBREF312 evaluated several variations of the LIGA algorithm introduced by BIBREF313 . BIBREF308 and BIBREF148 also used LIGA and logLIGA methods. The average or sum of relative frequencies was also used recently by BIBREF85 and BIBREF108 .", " BIBREF57 summed up LFDF values (see characters), obtaining 99.75% accuracy when classifying document sized texts between four languages using Arabic script. BIBREF110 calculates the score of the language for the test document INLINEFORM0 as the average of the probability estimates of the features, as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the number of features in the test document INLINEFORM1 . BIBREF153 summed weighted relative frequencies of character , and normalized the score by dividing by the length (in characters) of the test document. Taking the average of the terms in the sums does not change the order of the scored languages, but it gives comparable results between different lengths of test documents.", " BIBREF92 , BIBREF314 summed up the feature weights and divided them by the number of words in the test document in order to set a threshold to detect unknown languages. Their language identifier obtained 89% precision and 94% recall when classifying documents between five languages. BIBREF192 used a weighting method combining alphabets, prefixes, suffixes and words. 
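The simple scoring and sum of values schemes above reduce to a few lines of code; the following sketch uses toy word-frequency models purely for illustration, and any of the feature types and value weightings discussed earlier could be substituted.

    def simple_score(doc_features, model):
        """Simple scoring: one point for every feature attested in the language model."""
        return sum(1 for f in doc_features if f in model)

    def sum_of_values(doc_features, model):
        """Sum of values: add up the stored value (e.g. relative frequency) of each feature."""
        return sum(model.get(f, 0.0) for f in doc_features)

    # toy models mapping features (here words) to relative frequencies
    models = {"eng": {"the": 0.2, "cat": 0.1}, "fin": {"kissa": 0.3}}
    doc = ["the", "cat", "istui"]
    print(max(models, key=lambda lang: sum_of_values(doc, models[lang])))  # -> "eng"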
BIBREF233 summed up values from a word trigram ranking, basic dictionary and morphological analyzer lookup. BIBREF282 summed up language labels of the surrounding words to identify the language of the current word. BIBREF200 summed up points awarded by the presence of morphological and syntactic features. BIBREF102 used inverse rank positions as values. BIBREF158 computed the sum of keywords weighted with TF-IDF. BIBREF315 summed up the TF-IDF derived probabilities of words." ], [ "The product of values can be expressed as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the INLINEFORM1 th feature found in test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of language INLINEFORM4 . The language with the highest score is the winner. Some form of feature smoothing is usually required with the product of values method to avoid multiplying by zero.", " BIBREF26 was the first to use the product of relative frequencies and it has been widely used ever since; recent examples include BIBREF86 , BIBREF87 , BIBREF161 , and BIBREF148 . Some of the authors use a sum of log frequencies rather than a product of frequencies to avoid underflow issues over large numbers of features, but the two methods yield the same relative ordering, with the proviso that the maximum of multiplying numbers between 0 and 1 becomes the minimum of summing their negative logarithms, as can be inferred from: DISPLAYFORM0 ", "When (multinomial) NB is used in , each feature used has a probability to indicate each language. The probabilities of all features found in the test document are multiplied for each language, and the language with the highest probability is selected, as in productvalues1. Theoretically the features are assumed to be independent of each other, but in practice using features that are functionally dependent can improve classification accuracy BIBREF316 .", "NB implementations have been widely used for , usually with a more varied set of features than simple character or word of the same type and length. The features are typically represented as feature vectors given to a NB classifier. BIBREF283 trained a NB classifier with language labels of surrounding words to help predict the language of ambiguous words first identified using an SVM. The language identifier used by BIBREF77 obtained 99.97% accuracy with 5-grams of characters when classifying sentence-sized texts between six language groups. BIBREF265 used a probabilistic model similar to NB. BIBREF252 used NB and naive Bayes EM, which uses the Expectation–Maximization (“EM”) algorithm in a semi-supervised setting to improve accuracy. BIBREF4 used Gaussian naive Bayes (“GNB”, i.e. NB with Gaussian estimation over continuous variables) from scikit-learn.", "In contrast to NB, in Bayesian networks the features are not assumed to be independent of each other. The network learns the dependencies between features in a training phase. BIBREF315 used a Bayesian Net classifier in two-staged (group first) over the open track of the DSL 2015 shared task. BIBREF130 similarly evaluated Bayesian Nets, but found them to perform worse than the other 11 algorithms they tested.", " BIBREF25 used the product of the Markovian probabilities of character bigrams. The language identifier created by BIBREF153 , BIBREF101 , “whatlang”, obtains 99.2% classification accuracy with smoothing for 65 character test strings, when distinguishing between 1,100 languages. 
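In practice the product of values is almost always computed as a sum of log values, as noted above; the following sketch illustrates this, with a small floor value standing in for proper smoothing of unseen features. The floor constant and the toy models are assumptions made only for the example.

    import math

    def log_product_score(doc_features, model, floor=1e-9):
        """Product of values computed as a sum of logs to avoid numeric underflow.
        Unseen features fall back to a small floor value, a crude stand-in for
        the smoothing methods discussed earlier."""
        return sum(math.log(model.get(f, floor)) for f in doc_features)

    models = {"eng": {"the": 0.2, "cat": 0.1}, "fin": {"kissa": 0.3, "istui": 0.2}}
    doc = ["the", "cat"]
    print(max(models, key=lambda lang: log_product_score(doc, models[lang])))  # -> "eng"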
The product of Markovian probabilities has recently also been used by BIBREF109 and BIBREF260 .", " BIBREF170 use a word-based backoff method called HeLI. Here, each language is represented by several different language models, only one of which is used for each word found in the test document. The language models for each language are: a word-level language model, and one or more models based on character of order 1– INLINEFORM0 . When a word that is not included in the word-level model is encountered in a test document, the method backs off to using character of the size INLINEFORM1 . If there is not even a partial coverage here, the method backs off to lower order and continues backing off until at least a partial coverage is obtained (potentially all the way to character unigrams). The system of BIBREF170 implementing the HeLI method attained shared first place in the closed track of the DSL 2016 shared task BIBREF317 , and was the best method tested by BIBREF148 for test documents longer than 30 characters." ], [ "The well-known method of BIBREF7 uses overlapping character of varying sizes based on words. The language models are created by tokenizing the training texts for each language INLINEFORM0 into words, and then padding each word with spaces, one before and four after. Each padded word is then divided into overlapping character of sizes 1–5, and the counts of every unique are calculated over the training corpus. The are ordered by frequency and INLINEFORM1 of the most frequent , INLINEFORM2 , are used as the domain of the language model INLINEFORM3 for the language INLINEFORM4 . The rank of an INLINEFORM5 in language INLINEFORM6 is determined by the frequency in the training corpus INLINEFORM7 and denoted INLINEFORM8 .", "During , the test document INLINEFORM0 is treated in a similar way and a corresponding model INLINEFORM1 of the K most frequent is created. Then a distance score is calculated between the model of the test document and each of the language models. The value INLINEFORM2 is calculated as the difference in ranks between INLINEFORM3 and INLINEFORM4 of the INLINEFORM5 in the domain INLINEFORM6 of the model of the test document. If an is not found in a language model, a special penalty value INLINEFORM7 is added to the total score of the language for each missing . The penalty value should be higher than the maximum possible distance between ranks. DISPLAYFORM0 ", "The score INLINEFORM0 for each language INLINEFORM1 is the sum of values, as in sumvalues1. The language with the lowest score INLINEFORM2 is selected as the identified language. The method is equivalent to Spearman's measure of disarray BIBREF318 . The out-of-place method has been widely used in literature as a baseline. In the evaluations of BIBREF148 for 285 languages, the out-of-place method achieved an F-score of 95% for 35-character test documents. It was the fourth best of the seven evaluated methods for test document lengths over 20 characters.", "Local Rank Distance BIBREF319 is a measure of difference between two strings. LRD is calculated by adding together the distances identical units (for example character ) are from each other between the two strings. The distance is only calculated within a local window of predetermined length. BIBREF122 and BIBREF320 used LRD with a Radial Basis Function (“RBF”) kernel (see RBF). For learning they experimented with both Kernel Discriminant Analysis (“KDA”) and Kernel Ridge Regression (“KRR”). 
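Returning to the out-of-place method described above, the following sketch implements the rank-distance scoring. For brevity it omits the word padding of the original formulation and uses character 1–3-grams; the cut-off K and the penalty value are illustrative assumptions.

    from collections import Counter

    def ranked_ngrams(text, max_n=3, k=300):
        """Rank the k most frequent character n-grams (sizes 1..max_n) of a text."""
        counts = Counter()
        for n in range(1, max_n + 1):
            counts.update(text[i:i + n] for i in range(len(text) - n + 1))
        top = [g for g, _ in counts.most_common(k)]
        return {g: rank for rank, g in enumerate(top)}

    def out_of_place(doc_ranks, lang_ranks, penalty=None):
        """Sum of rank differences; n-grams missing from the language model
        receive a penalty at least as large as the maximum rank distance."""
        if penalty is None:
            penalty = len(lang_ranks)
        return sum(abs(rank - lang_ranks[g]) if g in lang_ranks else penalty
                   for g, rank in doc_ranks.items())

    corpora = {"eng": "the cat sat on the mat", "fin": "kissa istui matolla"}
    lang_models = {lang: ranked_ngrams(text) for lang, text in corpora.items()}
    doc_model = ranked_ngrams("the mat")
    # the language with the lowest out-of-place distance is selected
    print(min(lang_models, key=lambda lang: out_of_place(doc_model, lang_models[lang])))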
BIBREF248 also used KDA.", " BIBREF224 calculated the Levenshtein distance between the language models and each word in the mystery text. The similary score for each language was the inverse of the sum of the Levenshtein distances. Their language identifier obtained 97.7% precision when classifying texts from two to four words between five languages. Later BIBREF216 used Levenshtein distance for Algerian dialect identification and BIBREF305 for query word identification.", " BIBREF321 , BIBREF322 , BIBREF323 , and BIBREF324 calculated the difference between probabilities as in Equation EQREF109 . DISPLAYFORM0 ", "", "where INLINEFORM0 is the probability for the feature INLINEFORM1 in the mystery text and INLINEFORM2 the corresponding probability in the language model of the language INLINEFORM3 . The language with the lowest score INLINEFORM4 is selected as the most likely language for the mystery text. BIBREF239 , BIBREF262 used the log probability difference and the absolute log probability difference. The log probability difference proved slightly better, obtaining a precision of 94.31% using both character and word when classifying 100 character texts between 53 language-encoding pairs.", "Depending on the algorithm, it can be easier to view language models as vectors of weights over the target features. In the following methods, each language is represented by one or more feature vectors. Methods where each feature type is represented by only one feature vector are also sometimes referred to as centroid-based BIBREF58 or nearest prototype methods. Distance measures are generally applied to all features included in the feature vectors.", " BIBREF31 calculated the squared Euclidean distance between feature vectors. The Squared Euclidean distance can be calculated as: DISPLAYFORM0 ", " BIBREF93 used the simQ similarity measure, which is closely related to the Squared Euclidean distance.", " BIBREF155 investigated the of multilingual documents using a Stochastic Learning Weak Estimator (“SLWE”) method. In SLWE, the document is processed one word at a time and the language of each word is identified using a feature vector representing the current word as well as the words processed so far. This feature vector includes all possible units from the language models – in their case mixed-order character from one to four. The vector is updated using the SLWE updating scheme to increase the probabilities of units found in the current word. The probabilities of units that have been found in previous words, but not in the current one, are on the other hand decreased. After processing each word, the distance of the feature vector to the probability distribution of each language is calculated, and the best-matching language is chosen as the language of the current word. Their language identifier obtained 96.0% accuracy when classifying sentences with ten words between three languages. They used the Euclidean distance as the distance measure as follows: DISPLAYFORM0 ", " BIBREF325 compared the use of Euclidean distance with their own similarity functions. BIBREF112 calculated the cosine angle between the feature vector of the test document and the feature vectors acting as language models. This is also called the cosine similarity and is calculated as follows: DISPLAYFORM0 ", "The method of BIBREF112 was evaluated by BIBREF326 in the context of over multilingual documents. The cosine similarity was used recently by BIBREF131 . 
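The cosine similarity over sparse feature-count vectors can be sketched as follows; the toy word-count models are assumptions used only for illustration, and any of the character or word features discussed earlier could be substituted.

    import math
    from collections import Counter

    def cosine_similarity(doc_counts, lang_counts):
        """Cosine of the angle between two sparse feature-count vectors."""
        shared = set(doc_counts) & set(lang_counts)
        dot = sum(doc_counts[f] * lang_counts[f] for f in shared)
        norm_doc = math.sqrt(sum(v * v for v in doc_counts.values()))
        norm_lang = math.sqrt(sum(v * v for v in lang_counts.values()))
        return dot / (norm_doc * norm_lang) if norm_doc and norm_lang else 0.0

    doc = Counter("the mat".split())
    models = {"eng": Counter("the cat sat on the mat".split()),
              "fin": Counter("kissa istui matolla".split())}
    print(max(models, key=lambda lang: cosine_similarity(doc, models[lang])))  # -> "eng"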
One common trick with cosine similarity is to pre-normalise the feature vectors to unit length (e.g. BIBREF36 ), in which case the calculation takes the form of the simple dot product: DISPLAYFORM0 ", " BIBREF60 used chi-squared distance, calculated as follows: DISPLAYFORM0 ", " BIBREF85 compared Manhattan, Bhattacharyya, chi-squared, Canberra, Bray Curtis, histogram intersection, correlation distances, and out-of-place distances, and found the out-of-place method to be the most accurate.", " BIBREF239 , BIBREF262 used cross-entropy and symmetric cross-entropy. Cross-entropy is calculated as follows, where INLINEFORM0 and INLINEFORM1 are the probabilities of the feature INLINEFORM2 in the the test document INLINEFORM3 and the corpus INLINEFORM4 : DISPLAYFORM0 ", "Symmetric cross-entropy is calculated as: DISPLAYFORM0 ", "For cross-entropy, distribution INLINEFORM0 must be smoothed, and for symmetric cross-entropy, both probability distributions must be smoothed. Cross-entropy was used recently by BIBREF161 . BIBREF301 used a cross-entropy estimating method they call the Mean of Matching Statistics (“MMS”). In MMS every possible suffix of the mystery text INLINEFORM1 is compared to the language model of each language and the average of the lengths of the longest possible units in the language model matching the beginning of each suffix is calculated.", " BIBREF327 and BIBREF32 calculated the relative entropy between the language models and the test document, as follows: DISPLAYFORM0 ", "This method is also commonly referred to as Kullback-Leibler (“KL”) distance or skew divergence. BIBREF60 compared relative entropy with the product of the relative frequencies for different-sized character , and found that relative entropy was only competitive when used with character bigrams. The product of relative frequencies gained clearly higher recall with higher-order when compared with relative entropy.", " BIBREF239 , BIBREF262 also used the RE and MRE measures, which are based on relative entropy. The RE measure is calculated as follows: DISPLAYFORM0 ", "MRE is the symmetric version of the same measure. In the tests performed by BIBREF239 , BIBREF262 , the RE measure with character outperformed other tested methods obtaining 98.51% precision when classifying 100 character texts between 53 language-encoding pairs.", " BIBREF304 used a logistic regression (“LR”) model (also commonly referred to as “maximum entropy” within NLP), smoothed with a Gaussian prior. BIBREF328 defined LR for character-based features as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a normalization factor and INLINEFORM1 is the word count in the word-tokenized test document. BIBREF158 used an LR classifier and found it to be considerably faster than an SVM, with comparable results. Their LR classifier ranked 6 out of 9 on the closed submission track of the DSL 2015 shared task. BIBREF199 used Adaptive Logistic Regression, which automatically optimizes parameters. In recent years LR has been widely used for .", " BIBREF95 was the first to use perplexity for , in the manner of a language model. He calculated the perplexity for the test document INLINEFORM0 as follows: DISPLAYFORM0 DISPLAYFORM1 ", "where INLINEFORM0 were the Katz smoothed relative frequencies of word n-grams INLINEFORM1 of the length INLINEFORM2 . BIBREF146 and BIBREF148 evaluated the best performing method used by BIBREF145 . 
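A character n-gram perplexity of the kind described above can be sketched as follows. Lidstone smoothing is used here only to keep the sketch runnable on unseen n-grams; the cited systems use Katz or other smoothing methods, and the corpora and constants are toy assumptions.

    import math
    from collections import Counter

    def char_ngrams(text, n):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def perplexity(text, corpus, n=3, lam=0.5):
        """Perplexity of a test text under a smoothed character n-gram model."""
        ngram_counts = Counter(char_ngrams(corpus, n))
        context_counts = Counter(char_ngrams(corpus, n - 1))
        vocab = len(set(corpus)) + 1
        log_prob, count = 0.0, 0
        for gram in char_ngrams(text, n):
            numerator = ngram_counts[gram] + lam
            denominator = context_counts[gram[:-1]] + lam * vocab
            log_prob += math.log(numerator / denominator)
            count += 1
        return math.exp(-log_prob / count) if count else float("inf")

    corpora = {"eng": "the cat sat on the mat", "fin": "kissa istui matolla"}
    # the language model yielding the *lowest* perplexity is selected
    print(min(corpora, key=lambda lang: perplexity("the mat", corpora[lang])))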
Character n-gram based perplexity was the best method for extremely short texts in the evaluations of BIBREF148 , but for longer sequences the methods of BIBREF36 and BIBREF60 proved to be better. Lately, BIBREF182 also used perplexity.", " BIBREF20 used Yule's characteristic K and the Kolmogorov-Smirnov goodness of fit test to categorize languages. Kolmogorov-Smirnov proved to be the better of the two, obtaining 89% recall for 53 characters (one punch card) of text when choosing between two languages. In the goodness of fit test, the ranks of features in the models of the languages and the test document are compared. BIBREF329 experimented with Jiang and Conrath's (JC) distance BIBREF330 and Lin's similarity measure BIBREF331 , as well as the out-of-place method. They conclude that Lin's similarity measure was consistently the most accurate of the three. JC-distance measure was later evaluated by BIBREF239 , BIBREF262 , and was outperformed by the RE measure. BIBREF39 and BIBREF332 calculated special ratios from the number of trigrams in the language models when compared with the text to be identified. BIBREF333 , BIBREF334 , BIBREF335 used the quadratic discrimination score to create the feature vectors representing the languages and the test document. They then calculated the Mahalanobis distance between the languages and the test document. Their language identifier obtained 98.9% precision when classifying texts of four “screen lines” between 19 languages. BIBREF336 used odds ratio to identify the language of parts of words when identifying between two languages. Odds ratio for language INLINEFORM0 when compared with language INLINEFORM1 for morph INLINEFORM2 is calculated as in Equation EQREF127 . DISPLAYFORM0 " ], [ "The differences between languages can be stored in discriminant functions. The functions are then used to map the test document into an INLINEFORM0 -dimensional space. The distance of the test document to the languages known by the language identifier is calculated, and the nearest language is selected (in the manner of a nearest prototype classifier).", " BIBREF114 used multiple linear regression to calculate discriminant functions for two-way for Indian languages. BIBREF337 compared linear regression, NB, and LR. The precision for the three methods was very similar, with linear regression coming second in terms of precision after LR.", "Multiple discriminant analysis was used for by BIBREF18 . He used two functions, the first separated Finnish from English and Swedish, and the second separated English and Swedish from each other. He used Mahalanobis' INLINEFORM0 as a distance measure. BIBREF113 used Multivariate Analysis (“MVA”) with Principal Component Analysis (“PCA”) for dimensionality reduction and . BIBREF59 compared discriminant analysis with SVM and NN using characters as features, and concluded that the SVM was the best method.", " BIBREF40 experimented with the Winnow 2 algorithm BIBREF338 , but the method was outperformed by other methods they tested." ], [ "With support vector machines (“SVMs”), a binary classifier is learned by learning a separating hyperplane between the two classes of instances which maximizes the margin between them. The simplest way to extend the basic SVM model into a multiclass classifier is via a suite of one-vs-rest classifiers, where the classifier with the highest score determines the language of the test document. 
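As a concrete example of the one-vs-rest linear setup, the following sketch uses scikit-learn (assumed to be available) with character 1–3-gram TF-IDF features; LinearSVC trains one-vs-rest classifiers by default. The training texts, labels, and feature settings are toy assumptions, not the configuration of any cited system.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    train_texts = ["the cat sat on the mat", "a dog in the fog",
                   "kissa istui matolla", "koira juoksi pihalla"]
    train_langs = ["eng", "eng", "fin", "fin"]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # character 1-3-grams
        LinearSVC())
    clf.fit(train_texts, train_langs)
    print(clf.predict(["the dog sat"]))   # expected to come out as 'eng' on this toy data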
One feature of SVMs that has made them particularly popular is their compatibility with kernels, whereby the separating hyperplane can be calculated via a non-linear projection of the original instance space. In the following paragraphs, we list the different kernels that have been used with SVMs for .", "For with SVMs, the predominant approach has been a simple linear kernel SVM model. The linear kernel model has a weight vector INLINEFORM0 and the classification of a feature vector INLINEFORM1 , representing the test document INLINEFORM2 , is calculated as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a scalar bias term. If INLINEFORM1 is equal to or greater than zero, INLINEFORM2 is categorized as INLINEFORM3 .", "The first to use a linear kernel SVM were BIBREF339 , and generally speaking, linear-kernel SVMs have been widely used for , with great success across a range of shared tasks.", " BIBREF100 were the first to apply polynomial kernel SVMs to . With a polynomial kernel INLINEFORM0 can be calculated as: DISPLAYFORM0 ", "where INLINEFORM0 is the polynomial degree, and a hyperparameter of the model.", "Another popular kernel is the RBF function, also known as a Gaussian or squared exponential kernel. With an RBF kernel INLINEFORM0 is calculated as: DISPLAYFORM0 ", "where INLINEFORM0 is a hyperparameter. BIBREF321 were the first to use an RBF kernel SVM for .", "With sigmoid kernel SVMs, also known as hyperbolic tangent SVMs, INLINEFORM0 can be calculated as: DISPLAYFORM0 ", " BIBREF340 were the first to use a sigmoid kernel SVM for , followed by BIBREF341 , who found the SVM to perform better than NB, Classification And Regression Tree (“CART”), or the sum of relative frequencies.", "Other kernels that have been used with SVMs for include exponential kernels BIBREF178 and rational kernels BIBREF342 . BIBREF31 were the first to use SVMs for , in the form of string kernels using Ukkonen's algorithm. They used same string kernels with Euclidean distance, which did not perform as well as SVM. BIBREF87 compared SVMs with linear and on-line passive–aggressive kernels for , and found passive–aggressive kernels to perform better, but both SVMs to be inferior to NB and Log-Likelihood Ratio (sum of log-probabilities). BIBREF339 experimented with the Sequential Minimal Optimization (“SMO”) algorithm, but found a simple linear kernel SVM to perform better. BIBREF118 achieved the best results using the SMO algorithm, whereas BIBREF123 found CRFs to work better than SMO. BIBREF178 found that SMO was better than linear, exponential and polynomial kernel SVMs for Arabic tweet gender and dialect prediction.", "MultipleKernelSVMarticlesTable lists articles where SVMs with different kernels have been compared. BIBREF343 evaluated three different SVM approaches using datasets from different DSL shared tasks. SVM-based approaches were the top performing systems in the 2014 and 2015 shared tasks.", " BIBREF277 used SVMs with the Margin Infused Relaxed Algorithm, which is an incremental version of SVM training. In their evaluation, this method achieved better results than off-the-shelf ." ], [ " BIBREF344 was the first to use Neural Networks (“NN”) for , in the form of a simple BackPropagation Neural Network (“BPNN”) BIBREF345 with a single layer of hidden units, which is also called a multi-layer perceptron (“MLP”) model. She used words as the input features for the neural network. 
BIBREF346 and BIBREF347 succesfully applied MLP to .", " BIBREF348 , BIBREF349 and BIBREF350 used radial basis function (RBF) networks for . BIBREF351 were the first to use adaptive resonance learning (“ART”) neural networks for . BIBREF85 used Neural Text Categorizer (“NTC”: BIBREF352 ) as a baseline. NTC is an MLP-like NN using string vectors instead of number vectors.", " BIBREF111 were the first to use a RNN for . They concluded that RNNs are less accurate than the simple sum of logarithms of counts of character bi- or trigrams, possibly due to the relatively modestly-sized dataset they experimented with. BIBREF221 compared NNs with the out-of-place method (see sec. UID104 ). Their results show that the latter, used with bigrams and trigrams of characters, obtains clearly higher identification accuracy when dealing with test documents shorter than 400 characters.", "RNNs were more successfully used later by BIBREF245 who also incorporated character n-gram features in to the network architecture. BIBREF223 were the first to use a Long Short-Term Memory (“LSTM”) for BIBREF353 , and BIBREF354 was the first to use Gated Recurrent Unit networks (“GRUs”), both of which are RNN variants. BIBREF354 used byte-level representations of sentences as input for the networks. Recently, BIBREF89 and BIBREF176 also used LSTMs. Later, GRUs were successfully used for by BIBREF355 and BIBREF356 . In addition to GRUs, BIBREF354 also experimented with deep residual networks (“ResNets”) at DSL 2016.", "During 2016 and 2017, there was a spike in the use of convolutional neural networks (CNNs) for , most successfully by BIBREF302 and BIBREF357 . Recently, BIBREF358 combined a CNN with adversarial learning to better generalize to unseen domains, surpassing the results of BIBREF151 based on the same training regime as .", " BIBREF275 used CBOW NN, achieving better results over the development set of DSL 2017 than RNN-based neural networks. BIBREF62 used deep averaging networks (DANs) based on word embeddings in language variety identification." ], [ " BIBREF45 used the decision table majority classifier algorithm from the WEKA toolkit in English variety detection. The bagging algorithm using DTs was the best method they tested (73.86% accuracy), followed closely by the decision table with 73.07% accuracy.", " BIBREF359 were the first to apply hidden Markov models (HMM) to . More recently HMMs have been used by BIBREF214 , BIBREF288 , and BIBREF261 . BIBREF360 generated aggregate Markov models, which resulted in the best results when distinguishing between six languages, obtaining 74% accuracy with text length of ten characters. BIBREF156 used an extended Markov Model (“eMM”), which is essentially a standard HMM with modified emission probabilities. Their eMM used manually optimized weights to combine four scores (products of relative frequencies) into one score. BIBREF361 used Markov logic networks BIBREF362 to predict the language used in interlinear glossed text examples contained in linguistic papers.", " BIBREF363 evaluated the use of unsupervised Fuzzy C Means algorithm (“FCM”) in language identification. The unsupervised algorithm was used on the training data to create document clusters. Each cluster was tagged with the language having the most documents in the cluster. Then in the identification phase, the mystery text was mapped to the closest cluster and identified with its language. 
A supervised centroid classifier based on cosine similarity obtained clearly better results in their experiments (93% vs. 77% accuracy).", " BIBREF119 and BIBREF67 evaluated the extreme gradient boosting (“XGBoost”) method BIBREF364 . BIBREF119 found that gradient boosting gave better results than RFs, while conversely, BIBREF67 found that LR gave better results than gradient boosting.", " BIBREF365 used compression methods for , whereby a single test document is added to the training text of each language in turn, and the language with the smallest difference (after compression) between the sizes of the original training text file and the combined training and test document files is selected as the prediction. This has obvious disadvantages in terms of real-time computational cost for prediction, but is closely related to language modeling approaches to (with the obvious difference that the language model doesn't need to be retrained multiply for each test document). In terms of compression methods, BIBREF366 experimented with Maximal Tree Machines (“MTMs”), and BIBREF367 used LZW-based compression.", "Very popular in text categorization and topic modeling, BIBREF368 , BIBREF23 , and BIBREF24 used Latent Dirichlet Allocation (“LDA”: BIBREF369 ) based features in classifying tweets between Arabic dialects, English, and French. Each tweet was assigned with an LDA topic, which was used as one of the features of an LR classifier.", " BIBREF249 used a Gaussian Process classifier with an RBF kernel in an ensemble with an LR classifier. Their ensemble achieved only ninth place in the “PAN” (Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection workshop) Author Profiling language variety shared task BIBREF370 and did not reach the results of the baseline for the task.", " BIBREF181 , BIBREF188 used a Passive Aggressive classifier, which proved to be almost as good as the SVMs in their evaluations between five different machine learning algorithms from the same package." ], [ "Ensemble methods are meta-classification methods capable of combining several base classifiers into a combined model via a “meta-classifier” over the outputs of the base classifiers, either explicitly trained or some heuristic. It is a simple and effective approach that is used widely in machine learning to boost results beyond those of the individual base classifiers, and particularly effective when applied to large numbers of individually uncorrelated base classifiers.", " BIBREF20 used simple majority voting to combine classifiers using different features and methods. In majority voting, the language of the test document is identified if a majority ( INLINEFORM0 ) of the classifiers in the ensemble vote for the same language. In plurality voting, the language with most votes is chosen as in the simple scoring method (simple1). Some authors also refer to plurality voting as majority voting.", " BIBREF371 used majority voting in tweet . BIBREF210 used majority voting with JSM classifiers. BIBREF265 and BIBREF269 used majority voting between SVM classifiers trained with different features. BIBREF266 used majority voting to combine four classifiers: RF, random tree, SVM, and DT. BIBREF372 and BIBREF152 used majority voting between three off-the-shelf language identifiers. BIBREF104 used majority voting between perplexity-based and other classifiers. BIBREF141 used majority voting between three sum of relative frequencies-based classifiers where values were weighted with different weighting schemes. 
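Majority and plurality voting can be sketched in a few lines; the base-classifier outputs below are toy assumptions, and ties in plurality voting are broken arbitrarily in this sketch.

    from collections import Counter

    def plurality_vote(predictions):
        """Return the label predicted by the most base classifiers (ties broken arbitrarily)."""
        return Counter(predictions).most_common(1)[0][0]

    def majority_vote(predictions):
        """Return a label only if more than half of the base classifiers agree."""
        label, votes = Counter(predictions).most_common(1)[0]
        return label if votes > len(predictions) / 2 else None

    # e.g. the outputs of three base classifiers for one test document
    print(plurality_vote(["eng", "fin", "eng"]))  # -> "eng"
    print(majority_vote(["eng", "fin", "ger"]))   # -> None (no majority)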
BIBREF182 used voting between several perplexity-based classifiers with different features at the 2017 DSL shared task. A voting ensemble gave better results on the closed track than a single word-based perplexity classifier (0.9025 weighted F1-score vs. 0.9013), but worse results on the open track (0.9016 with the ensemble and 0.9065 without).

In a highest probability ensemble, the winner is simply the language which is given the highest probability by any of the individual classifiers in the ensemble. BIBREF96 used Gaussian Mixture Models (“GMMs”) to give probabilities to the outputs of classifiers using different features. BIBREF372 chose the higher-confidence prediction between two off-the-shelf language identifiers. BIBREF265 used a GMM to transform SVM prediction scores into probabilities. BIBREF270, BIBREF125 used highest confidence over a range of base SVMs. BIBREF125 used an ensemble composed of low-dimension hash-based classifiers. According to their experiments, hashing provided up to 86% dimensionality reduction without negatively affecting performance. Their probability-based ensemble obtained 89.2% accuracy, while the voting ensemble got 88.7%. BIBREF166 combined an SVM and an LR classifier.

A mean probability ensemble can be used to combine classifiers that produce probabilities (or other mutually comparable values) for languages. The average of the values for each language over the classifier outputs is used to determine the winner; the results are equal to those of the sum of values method (sumvalues1). BIBREF270 evaluated several ensemble methods and found that the mean probability ensemble attained better results than plurality voting, median probability, product, highest confidence, or Borda count ensembles.

In a median probability ensemble, the medians over the probabilities given by the individual classifiers are calculated for each language. BIBREF270 and BIBREF171 used a median probability rule ensemble over SVM classifiers. Consistent with the results of BIBREF270, BIBREF171 found that a mean ensemble was better than a median ensemble, attaining 68% accuracy vs. 67% for the median ensemble.

A product rule ensemble takes the probabilities from the base classifiers and calculates their product (or, equivalently, the sum of the log probabilities), with the effect of penalizing any language that receives a particularly low probability from any of the base classifiers. BIBREF210 used log probability voting with JSM classifiers, and observed a small increase in average accuracy using the product ensemble over a majority voting ensemble.

In a $k$-best ensemble, several models are created for each language by partitioning that language's corpus into separate samples, and a score is calculated for each model. The language of the test document is then predicted by plurality voting over the $k$ models with the best scores. BIBREF349 evaluated $k$-best ensembles based on several similarity measures. BIBREF54 compared two settings of $k$ and concluded that there was no major difference in accuracy when distinguishing between six languages (100 character test set). BIBREF373 experimented with $k$-best classifiers, but they gave clearly worse results than the other classifiers they evaluated. BIBREF212 used a $k$-best approach in two phases, first selecting the closest neighbors with a simple similarity measure, and then re-ranking them with a more advanced similarity measure.
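The probability-based combination rules above (highest probability, mean, and product) differ only in how the per-language scores of the base classifiers are aggregated; a minimal sketch, assuming each base classifier returns a dictionary of per-language probabilities:

```python
# Illustrative sketch: combining per-language probabilities from several base
# classifiers with the mean, product, and highest-probability rules.
import math

def combine(prob_dicts, rule="mean"):
    """prob_dicts: list of {language: probability} outputs, one per base classifier."""
    languages = set().union(*prob_dicts)
    scores = {}
    for lang in languages:
        probs = [d.get(lang, 1e-12) for d in prob_dicts]  # smooth missing languages
        if rule == "mean":
            scores[lang] = sum(probs) / len(probs)
        elif rule == "product":  # equivalently the sum of log probabilities
            scores[lang] = sum(math.log(p) for p in probs)
        elif rule == "highest":
            scores[lang] = max(probs)
    return max(scores, key=scores.get)

# Hypothetical outputs of three base classifiers.
base_outputs = [
    {"pt": 0.6, "es": 0.4},
    {"pt": 0.3, "es": 0.7},
    {"pt": 0.8, "es": 0.2},
]
for rule in ("mean", "product", "highest"):
    print(rule, combine(base_outputs, rule))
```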
In bagging, independent samples of the training data are generated by random sampling with replacement, individual classifiers are trained over each such training data sample, and the final classification is determined by plurality voting. BIBREF67 evaluated the use of bagging with an LR classifier in the PAN 2017 language variety identification shared task; however, bagging did not improve accuracy in the 10-fold cross-validation experiments on the training set. BIBREF374 used bagging with word convolutional neural networks (“W-CNNs”). BIBREF45 used bagging with DTs in English national variety detection and found DT-based bagging to be the best of the evaluated methods when all 60 different features (a wide selection of formal, POS, lexicon-based, and data-based features) were used, attaining 73.86% accuracy. BIBREF45 continued the experiments using the ReliefF feature selection algorithm from the WEKA toolkit to select the most effective features, and achieved 77.32% accuracy over the reduced feature set using an NB classifier.

BIBREF130 evaluated the Rotation Forest meta-classifier for DTs. The method randomly splits the feature set into a pre-determined number of subsets and then applies PCA to each subset. It obtained 66.6% accuracy, attaining fifth place among the twelve methods evaluated.

The AdaBoost algorithm BIBREF375 examines the performance of the base classifiers and iteratively boosts the significance of misclassified training instances, with a restart mechanism to avoid local minima. AdaBoost was the best of the five machine learning techniques evaluated by BIBREF53, faring better than C4.5, NB, RF, and linear SVM. BIBREF130 used the LogitBoost variation of AdaBoost. It obtained 67.0% accuracy, attaining third place among the twelve methods evaluated.

In stacking, a higher-level classifier is explicitly trained on the outputs of several base classifiers. BIBREF96 used AdaBoost.ECC and CART to combine classifiers using different features. More recently, BIBREF127 used LR to combine the results of five RNNs. As an ensemble, they produced better results than NB and LR, which in turn were better than the individual RNNs. Also in 2017, BIBREF185, BIBREF172 used RF to combine several linear SVMs with different features. The system used by BIBREF172 ranked first in the German dialect identification shared task, and the system by BIBREF185 came second (71.65% accuracy) in the Arabic dialect identification shared task.
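A minimal sketch of stacking, with an LR meta-classifier trained on the per-language probabilities produced by two base classifiers over different feature sets; the data is invented, and held-out predictions are used in place of the cross-validated predictions a production system would typically use.

```python
# Illustrative sketch: stacking — an LR meta-classifier trained on the
# per-language probabilities output by two base classifiers that use
# different feature sets (character vs. word n-grams).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical language-labeled training and held-out sentences.
train_texts = ["o gato está em casa", "el gato está en casa"] * 4
train_langs = ["pt", "es"] * 4
heldout_texts = ["a casa é grande", "la casa es grande"] * 4
heldout_langs = ["pt", "es"] * 4

base_char = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)), MultinomialNB())
base_word = make_pipeline(TfidfVectorizer(analyzer="word"), MultinomialNB())
for base in (base_char, base_word):
    base.fit(train_texts, train_langs)

def meta_features(texts):
    """Concatenate the per-language probabilities of all base classifiers."""
    return np.hstack([base.predict_proba(texts) for base in (base_char, base_word)])

# The meta-classifier is trained on held-out data the base classifiers did not see.
meta_clf = LogisticRegression().fit(meta_features(heldout_texts), heldout_langs)
print(meta_clf.predict(meta_features(["o gato é grande"])))
```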
In the previous two sections, we have alluded to issues of evaluation in language identification research to date. In this section, we examine the literature more closely, providing a broad overview of the evaluation metrics that have been used, as well as the experimental settings in which language identification research has been evaluated.

The most common approach is to treat the task as a document-level classification problem. Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold-standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric, and it conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. $1 - \text{accuracy}$).

Authors sometimes provide a per-language breakdown of results. There are two distinct ways in which results are generally summarized per-language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to the language they are actually written in. Earlier work tended to only provide a breakdown based on the correct label (i.e. only reporting per-language recall). This gives us a sense of how likely a document in any given language is to be classified correctly, but does not give an indication of how likely a prediction for a given language is to be correct. Under the monolingual assumption (i.e. each document is written in exactly one language), this is not too much of a problem, as a false negative for one language must also be a false positive for another language, so precision and recall are closely linked. Nonetheless, authors have recently tended to explicitly provide both precision and recall for clarity. It is also common practice to report an F-score $F = 2PR/(P+R)$, the harmonic mean of precision ($P$) and recall ($R$). The F-score (also sometimes called F1-score or F-measure) was developed in IR to measure the effectiveness of retrieval with respect to a user who attaches different relative importance to precision and recall BIBREF376. When used as an evaluation metric for classification tasks, it is common to place equal weight on precision and recall (hence “F1”-score, in reference to the $\beta$ hyper-parameter, which weights precision and recall equally when $\beta = 1$).

In addition to evaluating performance for each individual language, authors have also sought to convey the relationship between classification errors and specific sets of languages. Errors in language identification systems are generally not random; rather, certain sets of languages are much more likely to be confused. The typical method of conveying this information is through the use of a confusion matrix, a tabulation of the distribution of (predicted language, actual language) pairs.

Presenting full confusion matrices becomes problematic as the number of languages considered increases, and as a result has become relatively uncommon in work that covers a broader range of languages. Per-language results are also harder to interpret as the number of languages increases, and so it is common to present only collection-level summary statistics. There are two conventional methods for summarizing across a whole collection: (1) giving each document equal weight; and (2) giving each class (i.e. language) equal weight. (1) is referred to as a micro-average, and (2) as a macro-average. For language identification under the monolingual assumption, micro-averaged precision and recall are the same, since each instance of a false positive for one language must also be a false negative for another language. In other words, micro-averaged precision and recall are both simply the collection-level accuracy. On the other hand, macro-averaged precision and recall give equal weight to each language. In datasets where the number of documents per language is the same, this again works out to be the collection-level average. However, language identification research has frequently dealt with datasets where there is a substantial skew between classes. In such cases, the collection-level accuracy is strongly biased towards more heavily-represented languages. To address this issue, in work on skewed document collections, authors tend to report both the collection-level accuracy and the macro-averaged precision/recall/F-score, in order to give a more complete picture of the characteristics of the method being studied.
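The difference between micro- and macro-averaging is easy to see on a small skewed example; a minimal sketch using scikit-learn's standard metric implementations (the labels are invented for illustration):

```python
# Illustrative sketch: micro- vs. macro-averaged precision/recall/F-score on a
# skewed evaluation set, using scikit-learn's standard metric implementations.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold-standard and predicted labels; "en" dominates the collection.
gold = ["en"] * 8 + ["mt"] * 2
pred = ["en"] * 10          # the rare language is always misclassified as "en"

print("accuracy:", accuracy_score(gold, pred))  # 0.8, dominated by "en"

for average in ("micro", "macro"):
    p, r, f, _ = precision_recall_fscore_support(gold, pred, average=average, zero_division=0)
    print(average, round(p, 3), round(r, 3), round(f, 3))
# The micro-averaged P/R/F all equal the accuracy (0.8), while the macro-averaged
# scores are pulled down by the completely misclassified minority language.
```

Note that scikit-learn computes the macro-averaged F-score as the arithmetic mean of the per-class F-scores, which is the second of the two conventions discussed below.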
Whereas the notions of macro-averaged precision and recall are clearly defined, there are two possible methods to calculate the macro-averaged F-score. The first is to calculate it as the harmonic mean of the macro-averaged precision and recall, and the second is to calculate it as the arithmetic mean of the per-class F-scores.

The comparability of published results is also limited by the variation in the size and source of the data used for evaluation. In work to date, authors have used data from a variety of different sources to evaluate the performance of proposed solutions. Typically, data for a number of languages is collected from a single source, and the number of languages considered varies widely. Earlier work tended to focus on a smaller number of Western European languages. Later work has shifted focus to supporting larger numbers of languages simultaneously, with the work of BIBREF101 pushing the upper bound, reporting a language identifier that supports over 1300 languages. The increased size of the language sets considered is partly due to the increased availability of language-labeled documents from novel sources such as Wikipedia and Twitter. This supplements existing data from translations of the Universal Declaration of Human Rights, bible translations, parallel texts from MT datasets such as OPUS and SETimes, and European government data such as JRC-Acquis. These factors have led to a shift away from proprietary datasets such as the ECI multilingual corpus that were commonly used in earlier research. As more languages are considered simultaneously, the accuracy of systems decreases. A particularly striking illustration of this is provided by the evaluation results of BIBREF148 for the logLIGA method BIBREF312. BIBREF312 report an accuracy of 99.8% over tweets (averaging 80 characters) in six European languages, as opposed to 97.9% for the original LIGA method. The LIGA and logLIGA implementations by BIBREF148 have comparable accuracy for six languages, but the accuracy for 285 languages (with 70 character test length) is only slightly over 60% for logLIGA, while the original LIGA method reaches almost 85%. Many evaluations are not directly comparable, as the test sizes, language sets, and hyper-parameters differ. A particularly good example is the method of BIBREF7. The original paper reports an accuracy of 99.8% over eight European languages (>300 bytes test size). BIBREF150 report an accuracy of 68.6% for the method over a dataset of 67 languages (500 byte test size), and BIBREF148 report an accuracy of over 90% for 285 languages (25 character test size).

Separate from the question of the number and variety of languages included are issues regarding the quantity of training data used. A number of studies have examined the relationship between accuracy and the quantity of training data through the use of learning curves. The general finding is that accuracy increases with more training data, though some authors report an optimal amount of training data, where adding more training data decreases accuracy thereafter BIBREF377.
Overall, it is not clear whether there is a universal quantity of data that is “enough” for any language; rather, this amount appears to be affected by the particular set of languages as well as the domain of the data. The breakdown presented by BIBREF32 shows that with less than 100KB per language, there are some languages where classification accuracy is near perfect, whereas there are others where it is very poor.

Another aspect that is frequently reported on is how long a sample of text needs to be before its language can be correctly detected. Unsurprisingly, the general consensus is that longer samples are easier to classify correctly. There is a strong interest in classifying short segments of text, as certain applications naturally involve short text documents, such as language identification of microblog messages or search engine queries. Another area where language identification of texts as short as one word has been investigated is in the context of dealing with documents that contain text in more than one language, where word-level language identification has been proposed as a possible solution (see openissues:multilingual). These outstanding challenges have led to research focused specifically on language identification of shorter segments of text, which we discuss in more detail in openissues:short.

From a practical perspective, knowing the rate at which a system can process and classify documents is useful, as it allows a practitioner to predict the time required to process a document collection given certain computational resources. However, so many factors influence the rate at which documents are processed that comparison of absolute values across publications is largely meaningless. Instead, it is more valuable to consider publications that compare multiple systems under controlled conditions (same computer hardware, same evaluation data, etc.). The most common observations are that classification times between different algorithms can differ by orders of magnitude, and that the fastest methods are not always the most accurate. Beyond that, the diversity of systems tested and the variety in the test data make it difficult to draw further conclusions about the relative speed of algorithms.

Where explicit feature selection is used, the number of features retained is a parameter of interest, as it affects both the memory requirements of the system and its classification rate. In general, a smaller feature set results in a faster and more lightweight identifier. Relatively few authors give specific details of the relationship between the number of features selected and accuracy. A potential reason for this is that the improvement in accuracy plateaus with increasing feature count, though the exact number of features required varies substantially with the method and the data used. At the lower end of the scale, BIBREF7 report that 300–400 features per language are sufficient. Conversely, BIBREF148 found that, for the same method, the best results for the evaluation set were attained with 20,000 features per language.
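A minimal sketch of this kind of explicit feature selection, retaining only the most frequent character n-grams per language; the training texts and the cut-off of 400 features per language are illustrative assumptions.

```python
# Illustrative sketch: per-language feature selection by keeping only the N most
# frequent character n-grams observed in each language's training data.
from collections import Counter

def char_ngrams(text, n_values=(1, 2, 3)):
    for n in n_values:
        for i in range(len(text) - n + 1):
            yield text[i:i + n]

def select_features(train_texts_by_lang, per_language=400):
    """Union of the `per_language` most frequent n-grams of each language."""
    features = set()
    for texts in train_texts_by_lang.values():
        counts = Counter(g for text in texts for g in char_ngrams(text))
        features.update(g for g, _ in counts.most_common(per_language))
    return features

# Hypothetical training data.
train = {"fi": ["hyvää päivää", "kiitos paljon"], "et": ["tere hommikust", "aitäh"]}
feature_set = select_features(train, per_language=400)
print(len(feature_set), sorted(feature_set)[:10])
```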
As discussed in standardevaluation, the objective comparison of different methods for language identification is difficult due to the variation in the data that different authors have used to evaluate methods. BIBREF32 emphasize this by demonstrating how the performance of a system can vary according to the data used for evaluation. This implies that comparisons of results reported by different authors may not be meaningful, as a strong result in one paper may not translate into a strong result on the dataset used in a different paper. In other areas of research, authors have proposed standardized corpora to allow for the objective comparison of different methods.

Some authors have released datasets to accompany their work, to allow for direct replication of their experiments and to encourage comparison and standardization. datasets lists a number of datasets that have been released to accompany specific publications. In this list, we only include corpora that were prepared specifically for language identification research, and that include the full text of documents. Corpora of language-labeled Twitter messages that only provide document identifiers are also available, but reproducing the full original corpus is always an issue, as the original Twitter messages are deleted or otherwise made unavailable.

One challenge in standardizing datasets for language identification is that the codes used to label languages are not fully standardized, and a large proportion of labeling systems only cover a minor portion of the languages used in the world today BIBREF381. BIBREF382 discuss this problem in detail, listing different language code sets, as well as the internal structure exhibited by some of the code sets. Some standards consider certain groups of “languages” as varieties of a single macro-language, whereas others consider them to be discrete languages. An example of this is found in the South Slavic languages, where some language code sets refer to Serbo-Croatian, whereas others make distinctions between Bosnian, Serbian and Croatian BIBREF98. The unclear boundaries between such languages make it difficult to build a reference corpus of documents for each language, or to compare language-specific results across datasets.

Another challenge in standardizing datasets for language identification is the great deal of variation that can exist between data in the same language. We examine this in greater detail in openissues:encoding, where we discuss how the same language can use a number of different orthographies, can be digitized using a number of different encodings, and may also exist in transliterated forms. The issue of variation within a language complicates the development of standardized datasets, due to challenges in determining which variants of a language should be included. Since we have seen that the performance of language identification systems can vary per domain BIBREF32, that research is often motivated by target applications (see applications), and that domain-specific information can be used to improve accuracy (see openissues:domainspecific), it is often unsound to use a generic dataset to develop a language identifier for a particular domain.

A third challenge in standardizing datasets for language identification is the cost of obtaining correctly-labeled data. Manual labeling of data is usually prohibitively expensive, as it requires access to native speakers of all languages that the dataset aims to include. Large quantities of raw text data are available from sources such as web crawls or Wikipedia, but this data is frequently mislabeled (e.g. most non-English Wikipedias still include some English-language documents). In constructing corpora from such resources, it is common to use some form of automatic language identification, but this makes such corpora unsuitable for evaluation purposes, as they are biased towards documents that can be correctly identified by automatic systems BIBREF152. Future work in this area could investigate other means of ensuring correct gold-standard labels while minimizing the annotation cost.

Despite these challenges, standardized datasets are critical for replicable and comparable research in language identification.
Where a subset of data is used from a larger collection, researchers should include details of the specific subset, including any breakdown into training and test data, or partitions for cross-validation. Where data from a new source is used, justification should be given for its inclusion, as well as some means for other researchers to replicate experiments on the same dataset.

To address specific sub-problems in language identification, a number of shared tasks have been organized on problems such as language identification in multilingual documents BIBREF378, code-switched data BIBREF383, discriminating between closely related languages BIBREF384, and dialect and language variety identification in various languages BIBREF385, BIBREF386, BIBREF370, BIBREF387. Shared tasks are important for the field because they provide datasets and standardized evaluation methods that serve as benchmarks for the community. We summarize all shared tasks organized to date in sharedtasks.

Generally, datasets for shared tasks have been made publicly available after the conclusion of the task, and are a good source of standardized evaluation data. However, the shared tasks to date have tended to target specific sub-problems in language identification, and no general, broad-coverage datasets have been compiled. Widespread interest in discriminating between closely-related languages has resulted in a number of shared tasks that specifically tackle the issue. Some tasks have focused on varieties of a specific language. For example, the DEFT2010 shared task BIBREF385 examined varieties of French, requiring participants to classify French documents with respect to their geographical source, in addition to the decade in which they were published. Further examples are the Arabic Dialect Identification (“ADI”) shared task at the VarDial workshop BIBREF126, BIBREF386 and the Arabic Multi-Genre Broadcast (“MGB”) Challenge BIBREF387.

Two shared tasks focused on a narrow group of languages using Twitter data. The first was TweetLID, a shared task on language identification of Twitter messages over six languages in common use in Spain, namely: Spanish, Portuguese, Catalan, English, Galician, and Basque (in order of the number of documents in the dataset) BIBREF388, BIBREF389. The organizers provided almost 35,000 Twitter messages, and in addition to the six monolingual tags, supported four additional categories: undetermined, multilingual (i.e. the message contains more than one language, without requiring the system to specify the component languages), ambiguous (i.e. the message is ambiguous between two or more of the six target languages), and other (i.e. the message is in a language other than the six target languages). The second shared task was the PAN lab on authorship profiling 2017 BIBREF370. The PAN lab on authorship profiling is held annually and has historically focused on the prediction of age, gender, and personality traits in social media. In 2017, the competition added the identification of language varieties and dialects of Arabic, English, Spanish, and Portuguese.

More ambitiously, the four editions of the Discriminating between Similar Languages (DSL) shared task BIBREF384, BIBREF6, BIBREF317, BIBREF386 required participants to discriminate between a set of languages in several language groups, each consisting of highly similar languages or national varieties of a language. The dataset, entitled the DSL Corpus Collection (“DSLCC”) BIBREF77, and the languages included are summarized in dslcc. Historically, the best-performing systems BIBREF265, BIBREF390, BIBREF43 have approached the task via hierarchical classification, first predicting the language group, then the language within that group.
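A minimal sketch of such a two-stage hierarchical setup, with one classifier for the language group and a separate within-group classifier for each group; the language groups, features, and training sentences below are illustrative assumptions, not the configuration of any cited system.

```python
# Illustrative sketch: hierarchical classification — predict the language group
# first, then the individual language with a group-specific classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (text, group, language) triples.
train = [
    ("o cão está no jardim", "pt-es", "pt"), ("el perro está en el jardín", "pt-es", "es"),
    ("a casa é bonita", "pt-es", "pt"),      ("la casa es bonita", "pt-es", "es"),
    ("hunden er i haven", "da-no", "da"),    ("hunden er i hagen", "da-no", "no"),
    ("huset er stort", "da-no", "da"),       ("huset er veldig stort", "da-no", "no"),
]
texts, groups, langs = zip(*train)

def new_clf():
    return make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 4)), MultinomialNB())

# Stage 1: language-group classifier trained on all data.
group_clf = new_clf().fit(texts, groups)

# Stage 2: one within-group classifier per language group.
group_models = {}
for g in set(groups):
    g_texts = [t for t, gg in zip(texts, groups) if gg == g]
    g_langs = [l for l, gg in zip(langs, groups) if gg == g]
    group_models[g] = new_clf().fit(g_texts, g_langs)

def identify(text):
    group = group_clf.predict([text])[0]
    return group_models[group].predict([text])[0]

print(identify("el jardín es bonito"))
```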
There are various reasons to investigate language identification. Studies of language identification approach the task from different perspectives, and with different motivations and application goals in mind. In this section, we briefly summarize what these motivations are, and how their specific needs differ.

The oldest motivation for automatic language identification is perhaps in conjunction with translation BIBREF27. Automatic language identification is used as a pre-processing step to determine what translation model to apply to an input text, whether it be by routing to a specific human translator or by applying MT. Such a use case is still very common, and can be seen in the Google Chrome web browser, where a built-in language identification module is used to offer MT services to the user when the detected language of the web page being visited differs from the user's language settings.

NLP components such as POS taggers and parsers tend to make a strong assumption that the input text is monolingual in a given language. Similarly to the translation case, language identification can play an obvious role in routing documents written in different languages to NLP components tailored to those languages. More subtle is the case of documents with mixed multilingual content, the most commonly-occurring instance of which is foreign inclusion, where a document is predominantly in a single language (e.g. German or Japanese) but is interspersed with words and phrases (often technical terms) from a language such as English. For example, BIBREF391 found that around 6% of word tokens in German text sourced from the Internet are English inclusions. In the context of POS tagging, one strategy for dealing with inclusions is to have a dedicated POS tag for all foreign words, forcing the POS tagger to perform foreign inclusion detection as part of tagging words in the target language; this is the approach taken in the Penn POS tagset, for example BIBREF392. An alternative strategy is to have an explicit foreign inclusion detection pre-processor, and some special handling of foreign inclusions. For example, in the context of German parsing, BIBREF391 used foreign inclusion predictions to restrict the set of (German) POS tags used to form a parse tree, and found that this approach substantially improved parser accuracy.

Another commonly-mentioned use case for language identification is multilingual document storage and retrieval. A document retrieval system (such as, but not limited to, a web search engine) may be required to index documents in multiple languages. In such a setting, it is common to apply language identification at two points: (1) to the documents being indexed; and (2) to the queries being executed on the collection. Simple keyword matching techniques can be problematic in text-based document retrieval, because the same word can be valid in multiple languages. A classic example of such a word (known as a “false friend”) is gift, which in German means “poison”. Performing language identification on both the document and the query helps to avoid confusion between such terms, by taking advantage of the context in which a term appears in order to infer the language. This has resulted in specific work on language identification of web pages, as well as of search engine queries. BIBREF393 and BIBREF394 give overviews of shared tasks specifically concentrating on language labeling of individual search query words.
Having said this, in many cases the search query itself does a sufficiently good job of selecting documents in a particular language, and overt language identification is often not performed in mixed multilingual search contexts.

Automatic language identification has also been used to facilitate linguistic and other text-based research. BIBREF34 report that their motivation for developing a language identifier was “to find out how many web pages are written in a particular language”. Automatic language identification has also been used in constructing web-based corpora. The Crúbadán project BIBREF395 and the Finno-Ugric Languages and the Internet project BIBREF396 make use of automated language identification to gather linguistic resources for under-resourced languages. Similarly, the Online Database of INterlinear text (“ODIN”: BIBREF397) uses automated language identification as one of the steps in collecting interlinear glossed text from the web for purposes of linguistic search and bootstrapping NLP tools.

One challenge in collecting linguistic resources from the web is that documents can be multilingual (i.e. contain text in more than one language). This is problematic for standard language identification methods, which assume that a document is written in a single language, and has prompted research into segmenting text by language, as well as into word-level language identification, to enable the extraction of linguistic resources from multilingual documents. A number of shared tasks discussed in detail in evaluation:sharedtasks included data from social media. Examples are the TweetLID shared task on tweet language identification held at SEPLN 2014 BIBREF388, BIBREF389, the datasets used in the first and second shared tasks on language identification in code-switched data, which were partially taken from Twitter BIBREF383, BIBREF398, and the third edition of the DSL shared task, which contained two out-of-domain test sets consisting of tweets BIBREF317. The 5th edition of the PAN at CLEF author profiling task included language variety identification for tweets BIBREF370. There has also been research on identifying the language of private messages between eBay users BIBREF399, presumably as a filtering step prior to more in-depth data analysis.

An “off-the-shelf” language identifier is software that is distributed with pre-trained models for a number of languages, so that a user is not required to provide training data before using the system. Such a setup is highly attractive to many end-users of automatic language identification whose main interest is in utilizing the output of a language identifier rather than in implementing and developing the technique. To this end, a number of off-the-shelf language identifiers have been released over time. Many authors have evaluated these off-the-shelf identifiers; a recent evaluation involving 13 language identifiers was carried out by BIBREF400. In this section, we provide a brief summary of open-source or otherwise free systems that are available, as well as the key characteristics of each system. We have also included the dates when each system was last updated, as of October 2018.

TextCat is the best-known Perl implementation of the out-of-place method; it lists models for 76 languages in its off-the-shelf configuration, but the program is not actively maintained. TextCat is not the only off-the-shelf implementation of the out-of-place method: other implementations include libtextcat with 76 language models, JTCL with 15 languages, and mguesser with 104 models for different language-encoding pairs. The main issue addressed by these later implementations is classification speed: TextCat is implemented in Perl and is not optimized for speed, whereas implementations such as libtextcat and mguesser have been specifically written to be fast and efficient. whatlang-rs uses an algorithm based on character trigrams and refers the user to the BIBREF7 article. It comes pre-trained with 83 languages.
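For reference, a minimal sketch of the out-of-place ranking method that these tools implement; the profile size, n-gram lengths, and penalty for missing n-grams are illustrative choices rather than the exact settings of TextCat.

```python
# Illustrative sketch: the out-of-place method — compare frequency-ranked
# character n-gram profiles, summing rank differences (with a fixed penalty
# for n-grams absent from the language profile).
from collections import Counter

def profile(text, max_n=3, size=300):
    counts = Counter(text[i:i + n] for n in range(1, max_n + 1)
                     for i in range(len(text) - n + 1))
    ranked = [g for g, _ in counts.most_common(size)]
    return {g: rank for rank, g in enumerate(ranked)}

def out_of_place(test_profile, lang_profile, penalty=300):
    return sum(abs(rank - lang_profile.get(g, penalty))
               for g, rank in test_profile.items())

# Hypothetical training texts per language.
lang_profiles = {lang: profile(text) for lang, text in {
    "en": "the quick brown fox jumps over the lazy dog",
    "de": "der schnelle braune fuchs springt über den faulen hund",
}.items()}

test = profile("the dog is lazy")
print(min(lang_profiles, key=lambda lang: out_of_place(test, lang_profiles[lang])))
```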
The language identifier embedded in the Google Chrome web browser uses an NB classifier and script-specific classification strategies. It assumes that all input is in UTF-8, and assigns the responsibility of encoding detection and transcoding to the user. It uses Unicode information to determine the script of the input, and also implements a number of pre-processing heuristics to help boost performance on its target domain (web pages), such as stripping character sequences like .jpg. The standard implementation supports 83 languages, and an extended model is also available that supports 160 languages.

Another off-the-shelf identifier is a Java library that implements a language identifier based on an NB classifier trained over character n-grams. The software comes with pre-trained models for 53 languages, using data from Wikipedia. It makes use of a range of normalization heuristics to improve the performance on particular languages, including: (1) clustering of Chinese/Japanese/Korean characters to reduce sparseness; (2) removal of “language-independent” characters, and other text normalization; and (3) normalization of Arabic characters.

A further system is a Python implementation of the method described by BIBREF150, which exploits training data for the same language across multiple different sources of text to identify sequences of characters that are strongly predictive of a given language, regardless of the source of the text. This feature set is combined with an NB classifier, and the tool is distributed with a pre-trained model for 97 languages prepared using data from 5 different text sources. BIBREF151 provide an empirical comparison of this system to the two systems described above, and find that it compares favorably with both in terms of accuracy and classification speed. There are also implementations of its classifier component (but not the training portion) in Java, C, and JavaScript.

The system of BIBREF153 uses a vector-space model with per-feature weighting on character sequences. One particular feature of this system is that it uses discriminative training in selecting features, i.e. it specifically makes use of features that are strong evidence against a particular language, which is generally not captured by NB models. Another is that it uses inter-string smoothing to exploit sentence-level locality in making language predictions, under the assumption that adjacent sentences are likely to be in the same language. BIBREF153 reports that this substantially improves the accuracy of the identifier. A further distinguishing feature is that it comes pre-trained with data for 1400 languages, by a large margin the highest number of any off-the-shelf system.

whatthelang is a recent language identifier written in Python, which utilizes the FastText NN-based text classification algorithm. It supports 176 languages.

Another tool implements an off-the-shelf classifier trained using Wikipedia data, covering 122 languages.
Although not described as such, the actual classification algorithm it uses is a linear model, and it is thus closely related to both NB and a cosine-based vector space model.

In addition to the above-mentioned general-purpose language identifiers, there have also been efforts to produce pre-trained language identifiers targeted specifically at Twitter messages. One such system is a Twitter-specific tool with built-in models for 19 languages. It uses a document representation based on tries BIBREF401. The algorithm is an LR classifier using all possible substrings of the data, which is important to maximize the available information from the relatively short Twitter messages.

BIBREF152 provide a comparison of 8 off-the-shelf language identifiers applied without re-training to Twitter messages. One issue they report is that comparing the accuracy of off-the-shelf systems is difficult because of the different subset of languages supported by each system, which may also not fully cover the languages present in the target data. The authors choose to compare accuracy over the full set of languages, arguing that this best reflects the likely use case of applying an off-the-shelf system to new data. They identify the three best-performing individual systems, and find that slightly higher accuracy can be attained by a simple voting-based ensemble classifier involving these three systems.

In addition to this, commercial or other closed-source language identifiers and language identification services exist, of which we name a few. Polyglot 3000 and the Lextek Language Identifier are standalone language identifiers for Windows. The Open Xerox Language Identifier is a web service with available REST and SOAP APIs.

Several papers have catalogued open issues in language identification BIBREF327, BIBREF382, BIBREF1, BIBREF334, BIBREF32, BIBREF324, BIBREF317. Some of the issues, such as text representation (features) and choice of algorithm (methods), have already been covered in detail in this survey. In this section, we synthesize the remaining issues into a single section, and also add new issues that have not been discussed in previous work. For each issue, we review related work and suggest promising directions for future work.

Text preprocessing (also known as normalization) is an umbrella term for techniques where an automatic transformation is applied to text before it is presented to a classifier. The aim of such a process is to eliminate sources of variation that are expected to be confounding factors with respect to the target task. Text preprocessing is slightly different from data cleaning, as data cleaning is a transformation applied only to training data, whereas normalization is applied to both training and test data. BIBREF1 raise text preprocessing as an outstanding issue in language identification, arguing that its effects on the task have not been sufficiently investigated. In this section, we summarize the normalization strategies that have been proposed in the literature.

Case folding is the elimination of capitalization, replacing characters in a text with either their lower-case or upper-case forms. Basic approaches generally map between [a-z] and [A-Z] in the ASCII encoding, but this approach is insufficient for extended Latin encodings, where diacritics must also be appropriately handled.
A resource that makes this possible is the Unicode Character Database (UCD), which defines uppercase, lowercase, and titlecase properties for each character, enabling automatic case folding for documents in a Unicode encoding such as UTF-8.

Range compression is the grouping of a range of characters into a single logical set for counting purposes, and is a technique that is commonly used to deal with the sparsity that results from the character sets of ideographic languages, such as Chinese, that may have thousands of unique “characters”, each of which is observed with relatively low frequency. BIBREF402 use such a technique, where all characters in a given range are mapped into a single “bucket”, and the frequency of items in each bucket is used as a feature to represent the document. Byte-level representations of encodings that use multi-byte sequences to represent codepoints achieve a similar effect by “splitting” codepoints. In encodings such as UTF-8, the codepoints used by a single language are usually grouped together in “code planes”, where each codepoint in a given code plane shares the same upper byte. Thus, even though the distribution over codepoints may be quite sparse, when the byte-level representation uses byte sequences that are shorter than the multi-byte sequence of a codepoint, the shared upper byte will be predictive of specific languages.

Cleaning may also be applied, where heuristic rules are used to remove some data that is perceived to hinder the accuracy of the language identifier. For example, BIBREF34 identify HTML entities as a candidate for removal in document cleaning, on the basis that classifiers trained on data which does not include such entities may drop in accuracy when applied to raw HTML documents. Some of the off-the-shelf identifiers described above include heuristics such as expanding HTML entities, deleting digits and punctuation, and removing SGML-like tags; others also remove “language-independent characters” such as numbers, symbols, URLs, and email addresses, remove words that are all-capitals, and try to remove other acronyms and proper names using heuristics.

In the domain of Twitter messages, BIBREF313 remove links, usernames, smilies, and hashtags (a Twitter-specific “tagging” feature), arguing that these entities are language independent and thus should not feature in the model. BIBREF136 address language identification of web pages, and report removing HTML formatting and applying stopping using a small stopword list. BIBREF59 carry out experiments on the ECI multilingual corpus and report removing punctuation, space characters, and digits.

The idea of preprocessing text to eliminate domain-specific “noise” is closely related to the idea of learning domain-independent characteristics of a language BIBREF150. One difference is that normalization is normally heuristic-driven, where a manually-specified set of rules is used to eliminate unwanted elements of the text, whereas domain-independent text representations are data-driven, where text from different sources is used to identify the characteristics that a language shares between different sources. Both approaches share conceptual similarities with problems such as content extraction for web pages. In essence, the aim is to isolate the components of the text that actually represent language, and to suppress the components that carry other information. One application is the language-aware extraction of text strings embedded in binary files, which has been shown to perform better than conventional heuristic approaches BIBREF36. Future work in this area could focus specifically on the application of language-aware techniques to content extraction, using models of language to segment documents into textual and non-textual components. Such methods could also be used to iteratively improve language identification itself by improving the quality of training data.
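A minimal sketch of this kind of heuristic normalization, combining Unicode-aware case folding with regex-based removal of URLs, user mentions, hashtags, SGML-like tags, and digits; the exact rules are illustrative and not those of any specific system.

```python
# Illustrative sketch: heuristic text normalization prior to language
# identification — Unicode case folding plus removal of URLs, @-mentions,
# hashtags, digits, and HTML/SGML-like tags.
import re

PATTERNS = [
    r"https?://\S+",   # URLs
    r"@\w+",           # user mentions
    r"#\w+",           # hashtags
    r"<[^>]+>",        # SGML/HTML-like tags
    r"\d+",            # digits
]
CLEANER = re.compile("|".join(PATTERNS))

def normalize(text):
    text = CLEANER.sub(" ", text)
    text = text.casefold()                    # Unicode-aware case folding
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

print(normalize("Check http://example.com @user #nlp <b>Straße</b> 2024"))
# -> "check strasse"  (note that casefold() maps "ß" to "ss")
```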
Language identification is further complicated when we consider that some languages can be written in different orthographies (e.g. Bosnian and Serbian can be written in both Latin and Cyrillic script). Transliteration is another phenomenon that has a similar effect, whereby phonetic transcriptions in another script are produced for particular languages. These transcriptions can either be standardized and officially sanctioned, such as the use of Hanyu Pinyin for Chinese, or may emerge irregularly and organically, as in the case of arabizi for Arabic BIBREF403. BIBREF1 identify variation in the encodings and scripts used by a given language as an open issue in language identification, pointing out that early work tended to focus on languages written using a romanized script, and suggesting that dealing with issues of encoding and orthography adds substantial complexity to the task. BIBREF34 discuss the relative difficulties of discriminating between languages that vary in any combination of encoding, script, and language family, and give examples of pairs of languages that fall into each category.

Language identification across orthographies and transliteration is an area that has not received much attention in work to date, but it presents unique and interesting challenges that are suitable targets for future research. An interesting and unexplored question is whether it is possible to detect that documents in different encodings or scripts are written in the same language, or what language a text is transliterated from, without any a-priori knowledge of the encodings or scripts used. One possible approach could be to take advantage of the standard orderings of alphabets in a language – the pattern of differences between adjacent characters should be consistent across encodings, though whether this is characteristic of any given language requires exploration.

BIBREF1 paint a fairly bleak picture of the support for low-resource languages in automatic language identification. This is supported by the arguments of BIBREF382, who detail specific issues in building hugely multilingual datasets. BIBREF404 also specifically called for research into automatic language identification for low-density languages. Ethnologue BIBREF0 lists a total of 7099 languages. BIBREF382 describe the Ethnologue in more detail, and discuss the role that language identification plays in other aspects of supporting minority languages, including detecting and cataloging resources. The problem is circular: language identification methods are typically supervised, and need training data for each language to be covered, but the most efficient way to recover such data is through automatic language identification.

A number of projects are ongoing with the specific aim of gathering linguistic data from the web, targeting as broad a set of languages as possible. One such project is the aforementioned ODIN BIBREF361, BIBREF397, which aims to collect parallel snippets of text from linguistics articles published on the web. ODIN specifically targets articles containing Interlinear Glossed Text (IGT), a semi-structured format for presenting text and a corresponding gloss that is commonly used in linguistics.

Other projects that aim to create text corpora for under-resourced languages by crawling the web are the Crúbadán project BIBREF395 and SeedLing BIBREF405.
The Crúbadán crawler uses seed data in a target language to generate word lists that are in turn used as queries for a search engine. The returned documents are then compared with the seed resource via an automatic language identifier, which is used to eliminate false positives. BIBREF395 reports that corpora for over 400 languages have been built using this method. The SeedLing project crawls texts from several web sources, which has resulted in a total of 1451 languages from 105 language families. According to the authors, this represents 19% of the world's languages.

Much recent work on multilingual documents (openissues:multilingual) has been done with support for minority languages as a key goal. One of the common problems with gathering linguistic data from the web is that data in the target language is often embedded in a document containing data in another language. This has spurred recent developments in text segmentation by language and in word-level language identification. BIBREF326 present a method to detect documents that contain text in more than one language and to identify the languages present along with their relative proportions in the document. The method is evaluated on real-world data from a web crawl targeted at collecting documents for specific low-density languages.

Language identification for low-resource languages is a promising area for future work. One of the key questions that has not been clearly answered is how much data is needed to accurately model a language for purposes of language identification. Work to date suggests that there may not be a simple answer to this question, as accuracy varies according to the number and variety of languages modeled BIBREF32, as well as the diversity of data available to model a specific language BIBREF150.

Early research in language identification tended to focus on a very limited number of languages (sometimes as few as 2). This situation has improved somewhat, with many current off-the-shelf language identifiers supporting on the order of 50–100 languages (ots). The standout in this regard is BIBREF101, supporting 1311 languages in its default configuration. However, an evaluation of the identifier of BIBREF153 on a different domain found that the system suffered in terms of accuracy because it detected many languages that were not present in the test data BIBREF152.

BIBREF397 describe the construction of web crawlers specifically targeting IGT, as well as the identification of the languages represented in the IGT snippets. Language identification for thousands of languages from very small quantities of text is one of the issues that they have had to tackle. They list four specific challenges for language identification in ODIN: (1) the large number of languages; (2) “unseen” languages that appear in the test data but not in the training data; (3) short target sentences; and (4) (sometimes inconsistent) transliteration into Latin text. Their solution is to take advantage of a domain-specific feature: they assume that the name of the language that they are extracting must appear in the document containing the IGT, and hence treat this as a co-reference resolution problem. They report that this approach significantly outperforms the text-based approach in this particular problem setting.

An interesting area to explore is the trade-off between the number of languages supported and the accuracy per language.
From existing results it is not clear if it is possible to continue increasing the number of languages supported without adversely affecting the average accuracy, but it would be useful to quantify whether this is actually the case across a broad range of text sources. mostlanguages lists the articles in which language identification with more than 30 languages has been investigated.

“Unseen” languages are languages that we do not have training data for, but that may nonetheless be encountered by a system when applied to real-world data. Dealing with languages for which we do not have training data has been identified as an issue by BIBREF1, and has also been mentioned by BIBREF361 as a specific challenge in harvesting linguistic data from the web. BIBREF233 use an unlabeled training set with a labeled evaluation set for token-level code-switching identification between Modern Standard Arabic (MSA) and dialectal Arabic. They utilize existing dictionaries as well as a morphological analyzer for MSA, so the system is supported by extensive external knowledge sources. The possibility of using unannotated training material is nonetheless a very useful feature.

Some authors have attempted to tackle the unseen language problem through unsupervised labeling of text by language. BIBREF225 uses an unsupervised clustering algorithm to separate a multilingual corpus into groups corresponding to languages. She uses singular value decomposition (SVD) to first identify the words that discriminate between documents and then to separate the terms into highly correlating groups. The documents grouped together by these discriminating terms are merged, and the process is repeated until the desired number of groups (corresponding to languages) is reached. BIBREF412 also presents an approach to the unseen language problem, building graphs of co-occurrences of words in sentences, and then partitioning the graph using a custom graph-clustering algorithm which labels each word in a cluster with a single label. The number of labels is initialized to be the same as the number of words, and decreases as the algorithm is recursively applied. After a small number of iterations (the authors report 20), the labels become relatively stable and can be interpreted as cluster labels. Smaller clusters are then discarded, and the remaining clusters are interpreted as groups of words for each language. BIBREF413 compared the Chinese Whispers algorithm of BIBREF412 and Graclus clustering on unsupervised tweet language identification. They conclude that Chinese Whispers is better suited to the task. BIBREF414 used Fuzzy ART NNs for unsupervised language clustering of documents in Arabic, Persian, and Urdu. In Fuzzy ART, the clusters are also dynamically updated during the identification process.

BIBREF415 also tackle the unseen language problem through clustering. They use a character representation for text, and a clustering algorithm that consists of an initial $k$-means phase, followed by particle-swarm optimization. This produces a large number of small clusters, which are then labeled by language in a separate step. BIBREF240 used co-occurrences of words with $k$-means clustering in word-level unsupervised language identification. They used a Dirichlet process Gaussian mixture model (“DPGMM”), a non-parametric variant of a GMM, to automatically determine the number of clusters, and manually labeled the language of each cluster. BIBREF249 also used $k$-means clustering, and BIBREF416 used the $k$-means clustering algorithm in a custom framework.
BIBREF244 utilized unlabeled data to improve their system by using a CRF autoencoder, unsupervised word embeddings, and word lists.

A different partial solution to the issue of unseen languages is to design the classifier to be able to output “unknown” as a prediction for language. This helps to alleviate one of the problems commonly associated with the presence of unseen languages – classifiers without an “unknown” facility are forced to pick a language for each document, and in the case of unseen languages, the choice may be arbitrary and unpredictable BIBREF412. When language identification is used for filtering purposes, i.e. to select documents in a single language, this mislabeling can introduce substantial noise into the extracted data; furthermore, it does not matter what or how many unseen languages there are, as long as they are consistently rejected. The “unknown” output therefore provides an adequate solution to the unseen language problem for purposes of filtering.

The easiest way to implement unknown language detection is through thresholding. Most systems internally compute a score for each language for an unknown text, so thresholding can be applied either with a global threshold BIBREF33, a per-language threshold BIBREF34, or by comparing the scores of the $k$ top-scoring languages. The problem of unseen languages and open-set recognition was also considered by BIBREF270, BIBREF84, and BIBREF126. BIBREF126 experiments with one-class classification (“OCC”) and reaches an F-score of 98.9 using OC-SVMs (SVMs trained only with data from one language) to discriminate between 10 languages.

Another possible method for unknown language detection that has not been explored extensively in the literature is the use of non-parametric mixture models based on Hierarchical Dirichlet Processes (“HDP”). Such models have been successful in topic modeling, where an outstanding issue with the popular LDA model is the need to specify the number of topics in advance. BIBREF326 introduced an approach to detecting multilingual documents that uses a model very similar to LDA, where languages are analogous to topics in the LDA model. By a similar analogy, an HDP-based model may be able to detect documents that are written in a language that is not currently modeled by the system. BIBREF24 used LDA to cluster unannotated tweets. Recently, BIBREF417 used LDA in unsupervised sentence-level language identification. They manually identified the languages of the topics created with LDA. If there were more topics than languages, the topics in the same language were merged.

Filtering, a task that we mentioned earlier in this section, is a very common application of language identification, and it is therefore surprising that there is little research on filtering for specific languages. Filtering is a limit case of language identification with unseen languages, where all languages but one can be considered unknown. Future work could examine how useful different types of negative evidence are for filtering – if we want to detect English documents, for example, are there empirical advantages in having distinct models of Italian and German (even if we don't care about the distinction between the two languages), or can we group them all together in a single “negative” class? Are we better off including as many languages as possible in the negative class, or can we safely exclude some?
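A minimal sketch of the thresholding idea, returning “unknown” when the best per-language score is too low or too close to the runner-up; the scores and thresholds are invented for illustration.

```python
# Illustrative sketch: turning per-language scores into a prediction with an
# "unknown" output, using a global score threshold and a top-2 margin check.
def identify_with_unknown(scores, min_score=0.5, min_margin=0.1):
    """scores: {language: score}; returns a language code or 'unknown'."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best_lang, best), (_, second) = ranked[0], ranked[1]
    if best < min_score or (best - second) < min_margin:
        return "unknown"
    return best_lang

print(identify_with_unknown({"en": 0.92, "de": 0.05, "nl": 0.03}))  # -> "en"
print(identify_with_unknown({"en": 0.40, "de": 0.35, "nl": 0.25}))  # -> "unknown"
```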
Multilingual documents are documents that contain text in more than one language. In constructing the hrWac corpus, BIBREF97 found that 4% of the documents they collected contained text in more than one language. BIBREF329 report that web pages in many languages contain formulaic strings in English that do not actually contribute to the content of the page, but may nonetheless confound attempts to identify multilingual documents. Recent research has investigated how to make use of multilingual documents from sources such as web crawls BIBREF40, forum posts BIBREF263, and microblog messages BIBREF418. However, most language identification methods assume that a document contains text in a single language, and so are not directly applicable to multilingual documents.

Handling of multilingual documents has been named as an open research question BIBREF1. Most NLP techniques presuppose monolingual input data, so the inclusion of data in foreign languages introduces noise, and can degrade the performance of NLP systems. Automatic detection of multilingual documents can be used as a pre-filtering step to improve the quality of input data. Detecting multilingual documents is also important for acquiring linguistic data from the web, and has applications in mining bilingual texts for statistical MT from online resources BIBREF418, and in studying code-switching phenomena in online communications. There has also been interest in extracting text resources for low-density languages from multilingual web pages containing both the low-density language and another language such as English.

The need to handle multilingual documents has prompted researchers to revisit the granularity of language identification. Many researchers consider document-level language identification to be relatively easy, and regard sentence-level and word-level identification as more suitable targets for further research. However, word-level and sentence-level tokenization are not language-independent tasks, and for some languages they are substantially harder than for others BIBREF419.

The language identifier of BIBREF112 supports identification of multilingual documents. The system is based on a vector space model using cosine similarity, and language identification for multilingual documents is performed through the use of virtual mixed languages. BIBREF112 shows how to construct vectors representative of particular combinations of languages independent of the relative proportions, and proposes a method for choosing combinations of languages to consider for any given document. One weakness of this approach is that, for exhaustive coverage, the method is factorial in the number of languages, and as such intractable for a large set of languages. Furthermore, calculating the parameters for the virtual mixed languages becomes infeasibly complex for mixtures of more than 3 languages.

As mentioned previously, BIBREF326 propose an LDA-inspired method for multilingual documents that is able to identify that a document is multilingual, identify the languages present, and estimate the relative proportions of the document written in each language. To remove the need to specify the number of topics (or in this case, languages) in advance, BIBREF326 use a greedy heuristic that attempts to find the subset of languages that maximizes the posterior probability of a target document. One advantage of this approach is that it is not constrained to 3-language combinations like the method of BIBREF112. Language set identification has also been considered by BIBREF34, BIBREF407, BIBREF420, and BIBREF276.
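As a simple point of comparison with the methods above, the languages present in a document and their relative proportions can also be estimated by classifying fixed-size chunks independently and aggregating the chunk-level predictions; this is an illustrative baseline rather than the virtual-mixed-language or LDA-inspired methods just described, and `toy_identify` merely stands in for any monolingual classifier.

```python
# Illustrative sketch: estimating the set of languages in a document and their
# relative proportions by classifying fixed-size chunks independently.
from collections import Counter

def language_proportions(text, identify_chunk, chunk_size=200, min_share=0.1):
    """identify_chunk: any monolingual classifier mapping a string to a language code."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    counts = Counter(identify_chunk(chunk) for chunk in chunks if chunk.strip())
    total = sum(counts.values())
    # Keep only languages covering at least `min_share` of the chunks.
    return {lang: n / total for lang, n in counts.items() if n / total >= min_share}

# Toy stand-in classifier for demonstration purposes only.
def toy_identify(chunk):
    return "de" if any(w in chunk for w in (" der ", " und ", " ist ")) else "en"

doc = ("This is an English paragraph about nothing in particular. " * 5
       + "Dies ist ein deutscher Absatz und er ist etwas kürzer. " * 3)
print(language_proportions(doc, toy_identify))
```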
Language set identification has also been considered by BIBREF34 , BIBREF407 , BIBREF420 , and BIBREF276 .", "To encourage further research on LI for multilingual documents, in the aforementioned shared task hosted by the Australasian Language Technology Workshop 2010, discussed in the section on shared tasks, participants were required to predict the language(s) present in a held-out test set containing monolingual and bilingual documents BIBREF378 . The dataset was prepared using data from Wikipedia, and bilingual documents were produced using a segment from an article in one language and a segment from the equivalent article in another language. Equivalence between articles was determined using the cross-language links embedded within each Wikipedia article. The winning entry BIBREF421 first built monolingual models from multilingual training data, and then applied them to a chunked version of the test data, making the final prediction a function of the prediction over chunks.", "Another approach to handling multilingual documents is to attempt to segment them into contiguous monolingual segments. In addition to identifying the languages present, this requires identifying the locations of boundaries in the text which mark the transition from one language to another. Several methods for supervised language segmentation have been proposed. BIBREF33 generalized an LI algorithm for monolingual documents by adding a dynamic programming algorithm based on a simple Markov model of multilingual documents. More recently, multilingual LI algorithms have also been presented by BIBREF140 , BIBREF73 , BIBREF74 , BIBREF106 , and BIBREF82 ." ], [ "LI of short strings is known to be challenging for existing techniques. BIBREF37 tested four different classification methods, and found that all have substantially lower accuracy when applied to texts of 25 characters compared with texts of 125 characters. These findings were later strengthened, for example, by BIBREF145 and BIBREF148 .", " BIBREF195 describes a method specifically targeted at short texts that augments a dictionary with an affix table, which was tested over synthetic data derived from a parallel bible corpus. BIBREF145 focus on messages of 5–21 characters, using language models over data drawn from the Universal Declaration of Human Rights (UDHR). We would expect that generic methods for LI of short texts should be effective in any domain where short texts are found, such as search engine queries or microblog messages. However, BIBREF195 and BIBREF145 both only test their systems in a single domain: bible texts in the former case, and texts from the UDHR in the latter case. Other research has shown that results do not trivially generalize across domains BIBREF32 , and found that LI in UDHR documents is relatively easy BIBREF301 . For both bible and UDHR data, we expect that the linguistic content is relatively grammatical and well-formed, an expectation that does not carry across to domains such as search engine queries and microblogs. Another “short text” domain where LI has been studied is that of proper names. BIBREF306 identify this as an issue. BIBREF422 found that LI of names is more accurate than LI of generic words of equivalent length.", " BIBREF299 raise an important criticism of work on LI of Twitter messages to date: only a small number of European languages has been considered. BIBREF299 expand the scope of LI for Twitter, covering nine languages across Cyrillic, Arabic and Devanagari scripts.
BIBREF152 expand the evaluation further, introducing a dataset of language-labeled Twitter messages across 65 languages constructed using a semi-automatic method that leverages user identity to avoid inducing a bias in the evaluation set towards messages that existing systems are able to identify correctly. BIBREF152 also test a 1300-language model based on BIBREF153 , but find that it performs relatively poorly in the target domain due to a tendency to over-predict low-resource languages.", "Work has also been done on LI of single words in a document, where the task is to label each word in the document with a specific language. Work to date in this area has assumed that word tokenization can be carried out on the basis of whitespace. BIBREF35 explore word-level LI in the context of segmenting a multilingual document into monolingual segments. Other work has assumed that the languages present in the document are known in advance.", "Conditional random fields (“CRFs”: BIBREF423 ) are a sequence labeling method most often used in LI for labeling the language of individual words in a multilingual text. CRFs can be thought of as a finite state model with probabilistic transitions optimised over pre-defined cliques. They can use any observations made from the test document as features, including language labels given by monolingual language identifiers for words. BIBREF40 used a CRF trained with generalized expectation criteria, and found it to be the most accurate of all methods tested (NB, LR, HMM, CRF) at word-level LI. BIBREF40 introduce a technique to estimate the parameters using only monolingual data, an important consideration as there is no readily-available collection of manually-labeled multilingual documents with word-level annotations. BIBREF263 present a two-pass approach to processing Turkish-Dutch bilingual documents, where the first pass labels each word independently and the second pass uses the local context of a word to further refine the predictions. BIBREF263 achieved 97.6% accuracy on distinguishing between the two languages using a linear-chain CRF. BIBREF180 are the only ones so far to use a CRF for LI of monolingual texts. With a CRF, they attained a higher F-score in German dialect identification than NB or an ensemble consisting of NB, CRF, and SVM. Lately, CRFs were also used for LI by BIBREF52 and BIBREF44 . BIBREF296 investigate LI of individual words in the context of code switching. They find that smoothing of models substantially improves the accuracy of a language identifier based on an NB classifier when applied to individual words.", "While one line of research into LI has focused on pushing the boundaries of how many languages are supported simultaneously by a single system BIBREF382 , BIBREF36 , BIBREF153 , another has taken a complementary path and focused on LI in groups of similar languages.
Research in this area typically does not make a distinction between languages, varieties and dialects, because such terminological differences tend to be politically rather than linguistically motivated BIBREF424 , BIBREF382 , BIBREF5 , and from an NLP perspective the challenges faced are very similar.", "LI for closely-related languages, language varieties, and dialects has been studied for Malay–Indonesian BIBREF332 , Indian languages BIBREF114 , South Slavic languages BIBREF377 , BIBREF98 , BIBREF4 , BIBREF425 , Serbo-Croatian dialects BIBREF426 , English varieties BIBREF278 , BIBREF45 , Dutch–Flemish BIBREF53 , Dutch dialects (including a temporal dimension) BIBREF427 , German dialects BIBREF428 , Mainland–Singaporean–Taiwanese Chinese BIBREF429 , Portuguese varieties BIBREF5 , BIBREF259 , Spanish varieties BIBREF70 , BIBREF147 , French varieties BIBREF430 , BIBREF431 , BIBREF432 , languages of the Iberian Peninsula BIBREF388 , Romanian dialects BIBREF120 , and Arabic dialects BIBREF41 , BIBREF78 , BIBREF433 , BIBREF75 , BIBREF434 , the last of which we discuss in more detail in this section. As to off-the-shelf tools which can identify closely-related languages, BIBREF79 released a system trained to identify 27 languages, including 10 language varieties. Closely-related languages, language varieties, and dialects have also been the focus of a number of shared tasks in recent years, as discussed in the section on shared tasks.", "Similar languages are a known problem for existing language identifiers BIBREF332 , BIBREF435 . BIBREF34 identify language pairs from the same language family that also share a common script and the same encoding as the most difficult to discriminate. BIBREF98 report that an off-the-shelf language identifier achieves only 45% accuracy when trained and tested on a 3-way Bosnian/Serbian/Croatian dataset. BIBREF278 found that LI methods are not competitive with conventional word-based document categorization methods in distinguishing between national varieties of English. BIBREF332 reports that a character trigram model is able to distinguish Malay/Indonesian from English, French, German, and Dutch, but handcrafted rules are needed to distinguish between Malay and Indonesian. One kind of rule is the use of “exclusive words” that are known to occur in only one of the languages. A similar idea is used by BIBREF98 , in automatically learning a “blacklist” of words that have a strong negative correlation with a language – i.e. their presence implies that the text is not written in a particular language. In doing so, they achieve an overall accuracy of 98%, far surpassing the 45% of the off-the-shelf identifier. BIBREF153 also adopts such “discriminative training” to make use of negative evidence in LI.", " BIBREF435 observed that general-purpose approaches to LI typically use a character representation of text, but successful approaches for closely-related languages, varieties, and dialects seem to favor a word-based representation or higher-order character n-grams (e.g. 4-grams, 5-grams, and even 6-grams) that often cover whole words BIBREF429 , BIBREF98 , BIBREF278 , BIBREF343 . The study compared character-based with word-based representations for LI over varieties of Spanish, Portuguese and French, and found that word-level models performed better for varieties of Spanish, but character models performed better in the case of Portuguese and French.", "To train accurate and robust systems that discriminate between language varieties or similar languages, models should ideally be able to capture not only lexical but more abstract systemic differences between languages.
One way to achieve this is by using features based on de-lexicalized text representations (e.g. by substituting named entities or content words by placeholders), or at a higher level of abstraction, using POS tags or other morphosyntactic information BIBREF70 , BIBREF390 , BIBREF43 , or even adversarial machine learning to modify the learned representations to remove such artefacts BIBREF358 . Finally, an interesting research direction could be to combine work on closely-related languages with the analysis of regional or dialectal differences in language use BIBREF436 , BIBREF437 , BIBREF438 , BIBREF432 .", "In recent years, there has been a significant increase in interest in the computational processing of Arabic. This is evidenced by a number of research papers in several NLP tasks and applications including the identification/discrimination of Arabic dialects BIBREF41 , BIBREF78 . Arabic is particularly interesting for researchers interested in language variation due to the fact that the language is often in a diglossic situation, in which the standard form (Modern Standard Arabic or “MSA”) coexists with several regional dialects which are used in everyday communication.", "Among the studies published on the topic of Arabic LI, BIBREF41 proposed a supervised approach to distinguish between MSA and Egyptian Arabic at the sentence level, and achieved up to 85.5% accuracy over an Arabic online commentary dataset BIBREF379 . BIBREF433 achieved higher results over the same dataset using a linear-kernel SVM classifier.", " BIBREF78 compiled a dataset containing MSA, Egyptian Arabic, Gulf Arabic and Levantine Arabic, and used it to investigate three classification tasks: (1) MSA and dialectal Arabic; (2) four-way classification – MSA, Egyptian Arabic, Gulf Arabic, and Levantine Arabic; and (3) three-way classification – Egyptian Arabic, Gulf Arabic, and Levantine Arabic.", " BIBREF439 explores the use of sentence-level Arabic dialect identification as a pre-processor for MT, in customizing the selection of the MT model used to translate a given sentence to the dialect it uses. In performing dialect-specific MT, the authors achieve an improvement of 1.0% BLEU score compared with a baseline system which does not differentiate between Arabic dialects.", "Finally, in addition to the above-mentioned dataset of BIBREF379 , there are a number of notable multi-dialect corpora of Arabic: a multi-dialect corpus of broadcast speeches used in the ADI shared task BIBREF440 ; a multi-dialect corpus of (informal) written Arabic containing newspaper comments and Twitter data BIBREF441 ; a parallel corpus of 2,000 sentences in MSA, Egyptian Arabic, Tunisian Arabic, Jordanian Arabic, Palestinian Arabic, and Syrian Arabic, in addition to English BIBREF442 ; a corpus of sentences in 18 Arabic dialects (corresponding to 18 different Arabic-speaking countries) based on data manually sourced from web forums BIBREF75 ; and finally two recently compiled multi-dialect corpora containing microblog posts from Twitter BIBREF241 , BIBREF443 .", "While not specifically targeted at identifying language varieties, BIBREF355 made the critical observation that when naively trained, LI systems tend to perform most poorly over language varieties from the lowest socio-economic demographics (focusing particularly on the case of English), as they tend to be most under-represented in training corpora.
If, as a research community, we are interested in the social equitability of our systems, it is critical that we develop datasets that are truly representative of the global population, to better quantify and remove this effect. To this end, BIBREF355 detail a method for constructing a more representative dataset, and demonstrate the impact of training on such a dataset in terms of alleviating socio-economic bias." ], [ "One approach to LI is to build a generic language identifier that aims to correctly identify the language of a text without any information about the source of the text. Some work has specifically targeted LI across multiple domains, learning characteristics of languages that are consistent between different sources of text BIBREF150 . However, there are often domain-specific features that are useful for identifying the language of a text. In this survey, our primary focus has been on LI of digitally-encoded text, using only the text itself as evidence on which to base the prediction of the language. Within a text, there can sometimes be domain-specific peculiarities that can be used for LI. For example, BIBREF399 investigates LI of user-to-user messages in the eBay e-commerce portal. He finds that using only the first two and last two words of a message is sufficient for identifying the language of a message." ], [ "This article has presented a comprehensive survey on language identification (LI) of digitally-encoded text. We have shown that LI is a rich, complex, and multi-faceted problem that has engaged a wide variety of research communities. LI accuracy is critical as it is often the first step in longer text processing pipelines, so errors made in LI will propagate and degrade the performance of later stages. Under controlled conditions, such as limiting the number of languages to a small set of Western European languages and using long, grammatical, and structured text such as government documents as training data, it is possible to achieve near-perfect accuracy. This led many researchers to consider LI a solved problem, as argued by BIBREF2 . However, LI becomes much harder when taking into account the peculiarities of real-world data, such as very short documents (e.g. search engine queries), non-linguistic “noise” (e.g. HTML markup), non-standard use of language (e.g. as seen in social media data), and mixed-language documents (e.g. forum posts in multilingual web forums).", "Modern approaches to LI are generally data-driven and are based on comparing new documents with models of each target language learned from data. The types of models as well as the sources of training data used in the literature are diverse, and work to date has not compared and evaluated these in a systematic manner, making it difficult to draw broader conclusions about what the “best” method for LI actually is. We have attempted to synthesize results to date to identify a set of “best practices”, but these should be treated as guidelines and should always be considered in the broader context of a target application.", "Existing work on LI serves to illustrate that the scope and depth of the problem are much greater than they may first appear. In the section on research directions and open issues above, we discussed open issues in LI, identifying the key challenges, and outlining opportunities for future research.
Far from being a solved problem, aspects of LI make it an archetypal learning task with subtleties that could be tackled by future work on supervised learning, representation learning, multi-task learning, domain adaptation, multi-label classification and other subfields of machine learning. We hope that this paper can serve as a reference point for future work in the area, both by providing insight into work to date and by pointing towards the key aspects that merit further investigation.", "This research was supported in part by the Australian Research Council, the Kone Foundation and the Academy of Finland. We would like to thank Kimmo Koskenniemi for many valuable discussions and comments concerning the early phases of the features and the methods sections." ] ], "section_name": [ "Introduction", "LI as Text Categorization", "Previous Surveys", "A Brief History of LI", "On Notation", "An Archetypal Language Identifier", "On the Equivalence of Methods", "Features", "Bytes and Encodings", "Characters", "Character Combinations", "Morphemes, Syllables and Chunks", "Words", "Word Combinations", "Feature Smoothing", "Methods", "Decision Rules", "Decision Trees", "Simple Scoring", "Sum or Average of Values", "Product of Values", "Similarity Measures", "Discriminant Functions", "Support Vector Machines (“SVMs”)", "Neural Networks (“NN”)", "Other Methods", "Ensemble Methods", "Empirical Evaluation", "Standardized Evaluation for LI", "Corpora Used for Evaluation", "Shared Tasks", "Application Areas", "Off-the-Shelf Language Identifiers", "Research Directions and Open Issues in LI", "Text Preprocessing", "Orthography and Transliteration", "Supporting Low-Resource Languages", "Number of Languages", "“Unseen” Languages and Unsupervised LI", "Multilingual Documents", "Short Texts", "Similar Languages, Language Varieties, and Dialects", "Domain-specific LI", "Conclusions" ] }
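As a concrete illustration of the score thresholding discussed above for handling unseen languages, the following minimal sketch (our illustration, not one of the surveyed systems; the add-one smoothing, per-n-gram averaging, and toy thresholds are assumptions) scores a text against per-language character n-gram models and falls back to an “unknown” label when the winning score does not clear that language's threshold:

```python
import collections
import math


def char_ngrams(text, n=3):
    padded = f" {text.strip().lower()} "
    return [padded[i:i + n] for i in range(max(0, len(padded) - n + 1))]


def train_model(texts, n=3):
    """Character n-gram counts for one language plus their total."""
    counts = collections.Counter(g for t in texts for g in char_ngrams(t, n))
    return counts, sum(counts.values())


def score(text, model, n=3):
    """Average add-one-smoothed log-probability per n-gram of the text."""
    counts, total = model
    grams = char_ngrams(text, n)
    vocab = len(counts) + 1
    return sum(math.log((counts[g] + 1) / (total + vocab)) for g in grams) / max(1, len(grams))


def identify(text, models, thresholds):
    """Pick the best-scoring language, or return 'unknown' when its score is
    below that language's threshold (the per-language thresholding variant)."""
    scores = {lang: score(text, m) for lang, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= thresholds.get(best, float("-inf")) else "unknown"


# toy usage: in practice thresholds would be tuned per language on held-out data
models = {"en": train_model(["the cat sat on the mat", "language identification"]),
          "fi": train_model(["kissa istui matolla", "kielen tunnistaminen"])}
print(identify("the dog sat", models, {"en": -6.0, "fi": -6.0}))
```

A global threshold, or a comparison of the scores of the top-scoring languages, can be implemented in the same way by changing only the final decision step.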
{ "answers": [ { "annotation_id": [ "487eb0ec2a10b179d7312cc807155ea0d3f21f1a" ], "answer": [ { "evidence": [ "The most common approach is to treat the task as a document-level classification problem. Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold-standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric and conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. INLINEFORM0 ).", "Authors sometimes provide a per-language breakdown of results. There are two distinct ways in which results are generally summarized per-language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to what language they are actually written in. Earlier work has tended to only provide a breakdown based on the correct label (i.e. only reporting per-language recall). This gives us a sense of how likely a document in any given language is to be classified correctly, but does not give an indication of how likely a prediction for a given language is of being correct. Under the monolingual assumption (i.e. each document is written in exactly one language), this is not too much of a problem, as a false negative for one language must also be a false positive for another language, so precision and recall are closely linked. Nonetheless, authors have recently tended to explicitly provide both precision and recall for clarity. It is also common practice to report an F-score INLINEFORM0 , which is the harmonic mean of precision and recall. The F-score (also sometimes called F1-score or F-measure) was developed in IR to measure the effectiveness of retrieval with respect to a user who attaches different relative importance to precision and recall BIBREF376 . When used as an evaluation metric for classification tasks, it is common to place equal weight on precision and recall (hence “F1”-score, in reference to the INLINEFORM1 hyper-parameter, which equally weights precision and recall when INLINEFORM2 )." ], "extractive_spans": [ "document-level accuracy", "precision", "recall", "F-score" ], "free_form_answer": "", "highlighted_evidence": [ "Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold-standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric and conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. INLINEFORM0 ).", "There are two distinct ways in which results are generally summarized per-language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to what language they are actually written in.", "It is also common practice to report an F-score INLINEFORM0 , which is the harmonic mean of precision and recall." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "715e5291266c5a62fee2fc907a3e34c3fb73f103" ], "answer": [ { "evidence": [ "TextCat is the most well-known Perl implementation of the out-of-place method, it lists models for 76 languages in its off-the-shelf configuration; the program is not actively maintained. TextCat is not the only example of an off-the-shelf implementation of the out-of-place method: other implementations include libtextcat with 76 language models, JTCL with 15 languages, and mguesser with 104 models for different language-encoding pairs. The main issue addressed by later implementations is classification speed: TextCat is implemented in Perl and is not optimized for speed, whereas implementations such as libtextcat and mguesser have been specifically written to be fast and efficient. whatlang-rs uses an algorithm based on character trigrams and refers the user to the BIBREF7 article. It comes pre-trained with 83 languages.", "is the language identifier embedded in the Google Chrome web browser. It uses a NB classifier, and script-specific classification strategies. assumes that all the input is in UTF-8, and assigns the responsibility of encoding detection and transcoding to the user. uses Unicode information to determine the script of the input. also implements a number of pre-processing heuristics to help boost performance on its target domain (web pages), such as stripping character sequences like .jpg. The standard implementation supports 83 languages, and an extended model is also available, that supports 160 languages.", "is a Java library that implements a language identifier based on a NB classifier trained over character . The software comes with pre-trained models for 53 languages, using data from Wikipedia. makes use of a range of normalization heuristics to improve the performance on particular languages, including: (1) clustering of Chinese/Japanese/Korean characters to reduce sparseness; (2) removal of “language-independent” characters, and other text normalization; and (3) normalization of Arabic characters.", "is a Python implementation of the method described by BIBREF150 , which exploits training data for the same language across multiple different sources of text to identify sequences of characters that are strongly predictive of a given language, regardless of the source of the text. This feature set is combined with a NB classifier, and is distributed with a pre-trained model for 97 languages prepared using data from 5 different text sources. BIBREF151 provide an empirical comparison of to , and and find that it compares favorably both in terms of accuracy and classification speed. There are also implementations of the classifier component (but not the training portion) of in Java, C, and JavaScript.", "BIBREF153 uses a vector-space model with per-feature weighting on character sequences. One particular feature of is that it uses discriminative training in selecting features, i.e. it specifically makes use of features that are strong evidence against a particular language, which is generally not captured by NB models. Another feature of is that it uses inter-string smoothing to exploit sentence-level locality in making language predictions, under the assumption that adjacent sentences are likely to be in the same language. BIBREF153 reports that this substantially improves the accuracy of the identifier. 
Another distinguishing feature of is that it comes pre-trained with data for 1400 languages, which is the highest number by a large margin of any off-the-shelf system.", "whatthelang is a recent language identifier written in Python, which utilizes the FastText NN-based text classification algorithm. It supports 176 languages.", "implements an off-the-shelf classifier trained using Wikipedia data, covering 122 languages. Although not described as such, the actual classification algorithm used is a linear model, and is thus closely related to both NB and a cosine-based vector space model.", "In addition to the above-mentioned general-purpose language identifiers, there have also been efforts to produce pre-trained language identifiers targeted specifically at Twitter messages. is a Twitter-specific tool with built-in models for 19 languages. It uses a document representation based on tries BIBREF401 . The algorithm is a LR classifier using all possible substrings of the data, which is important to maximize the available information from the relatively short Twitter messages.", "BIBREF152 provides a comparison of 8 off-the-shelf language identifiers applied without re-training to Twitter messages. One issue they report is that comparing the accuracy of off-the-shelf systems is difficult because of the different subset of languages supported by each system, which may also not fully cover the languages present in the target data. The authors choose to compare accuracy over the full set of languages, arguing that this best reflects the likely use-case of applying an off-the-shelf system to new data. They find that the best individual systems are , and , but that slightly higher accuracy can be attained by a simple voting-based ensemble classifier involving these three systems.", "In addition to this, commercial or other closed-source language identifiers and language identifier services exist, of which we name a few. The Polyglot 3000 and Lextek Language Identifier are standalone language identifiers for Windows. Open Xerox Language Identifier is a web service with available REST and SOAP APIs." ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier.", "highlighted_evidence": [ "TextCat is the most well-known Perl implementation of the out-of-place method, it lists models for 76 languages in its off-the-shelf configuration; the program is not actively maintained.", "TextCat is the most well-known Perl implementation of the out-of-place method, it lists models for 76 languages in its off-the-shelf configuration; the program is not actively maintained. TextCat is not the only example of an off-the-shelf implementation of the out-of-place method: other implementations include libtextcat with 76 language models, JTCL with 15 languages, and mguesser with 104 models for different language-encoding pairs. The main issue addressed by later implementations is classification speed: TextCat is implemented in Perl and is not optimized for speed, whereas implementations such as libtextcat and mguesser have been specifically written to be fast and efficient. whatlang-rs uses an algorithm based on character trigrams and refers the user to the BIBREF7 article. It comes pre-trained with 83 languages.\n\nis the language identifier embedded in the Google Chrome web browser. 
It uses a NB classifier, and script-specific classification strategies. assumes that all the input is in UTF-8, and assigns the responsibility of encoding detection and transcoding to the user. uses Unicode information to determine the script of the input. also implements a number of pre-processing heuristics to help boost performance on its target domain (web pages), such as stripping character sequences like .jpg. The standard implementation supports 83 languages, and an extended model is also available, that supports 160 languages.\n\nis a Java library that implements a language identifier based on a NB classifier trained over character . The software comes with pre-trained models for 53 languages, using data from Wikipedia. makes use of a range of normalization heuristics to improve the performance on particular languages, including: (1) clustering of Chinese/Japanese/Korean characters to reduce sparseness; (2) removal of “language-independent” characters, and other text normalization; and (3) normalization of Arabic characters.\n\nis a Python implementation of the method described by BIBREF150 , which exploits training data for the same language across multiple different sources of text to identify sequences of characters that are strongly predictive of a given language, regardless of the source of the text. This feature set is combined with a NB classifier, and is distributed with a pre-trained model for 97 languages prepared using data from 5 different text sources. BIBREF151 provide an empirical comparison of to , and and find that it compares favorably both in terms of accuracy and classification speed. There are also implementations of the classifier component (but not the training portion) of in Java, C, and JavaScript.\n\nBIBREF153 uses a vector-space model with per-feature weighting on character sequences. One particular feature of is that it uses discriminative training in selecting features, i.e. it specifically makes use of features that are strong evidence against a particular language, which is generally not captured by NB models. Another feature of is that it uses inter-string smoothing to exploit sentence-level locality in making language predictions, under the assumption that adjacent sentences are likely to be in the same language. BIBREF153 reports that this substantially improves the accuracy of the identifier. Another distinguishing feature of is that it comes pre-trained with data for 1400 languages, which is the highest number by a large margin of any off-the-shelf system.\n\nwhatthelang is a recent language identifier written in Python, which utilizes the FastText NN-based text classification algorithm. It supports 176 languages.\n\nimplements an off-the-shelf classifier trained using Wikipedia data, covering 122 languages. Although not described as such, the actual classification algorithm used is a linear model, and is thus closely related to both NB and a cosine-based vector space model.\n\nIn addition to the above-mentioned general-purpose language identifiers, there have also been efforts to produce pre-trained language identifiers targeted specifically at Twitter messages. is a Twitter-specific tool with built-in models for 19 languages. It uses a document representation based on tries BIBREF401 . 
The algorithm is a LR classifier using all possible substrings of the data, which is important to maximize the available information from the relatively short Twitter messages.\n\nBIBREF152 provides a comparison of 8 off-the-shelf language identifiers applied without re-training to Twitter messages. One issue they report is that comparing the accuracy of off-the-shelf systems is difficult because of the different subset of languages supported by each system, which may also not fully cover the languages present in the target data. The authors choose to compare accuracy over the full set of languages, arguing that this best reflects the likely use-case of applying an off-the-shelf system to new data. They find that the best individual systems are , and , but that slightly higher accuracy can be attained by a simple voting-based ensemble classifier involving these three systems.\n\nIn addition to this, commercial or other closed-source language identifiers and language identifier services exist, of which we name a few. The Polyglot 3000 and Lextek Language Identifier are standalone language identifiers for Windows. Open Xerox Language Identifier is a web service with available REST and SOAP APIs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "what evaluation methods are discussed?", "what are the off-the-shelf systems discussed in the paper?" ], "question_id": [ "626873982852ec83c59193dd2cf73769bf77b3ed", "b3a09d2e3156c51bd5fdc110a2a00a67bb8c0e42" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
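The evaluation measures referred to in the annotations above (document-level accuracy, and per-language precision, recall, and F-score) can be computed directly from gold and predicted labels; the sketch below (illustrative only, not tied to any particular surveyed system) groups documents by predicted language for precision and by true language for recall:

```python
import collections


def li_scores(gold, predicted):
    """gold, predicted: parallel lists of language labels, one per document.
    Precision groups documents by predicted language, recall by true language,
    F1 is their harmonic mean; accuracy is the overall fraction correct."""
    correct = collections.Counter(g for g, p in zip(gold, predicted) if g == p)
    by_pred = collections.Counter(predicted)
    by_gold = collections.Counter(gold)
    per_language = {}
    for lang in set(gold) | set(predicted):
        p = correct[lang] / by_pred[lang] if by_pred[lang] else 0.0
        r = correct[lang] / by_gold[lang] if by_gold[lang] else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        per_language[lang] = {"precision": p, "recall": r, "f1": f}
    accuracy = sum(correct.values()) / len(gold)
    return per_language, accuracy


scores, acc = li_scores(["en", "en", "fi", "de"], ["en", "fi", "fi", "de"])
```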
{ "caption": [ "Table 1: Excerpts from Wikipedia articles on NLP in different languages.", "Table 2: List of articles (2013–2017) where relative frequencies of character n-grams have been used as features. The columns indicate the length of the n-grams used. “***” indicates the empirically best n-gram length in that paper, and “**” the secondbest n-gram length. “*” indicates less effective n-gram lengths. “...” indicates that even higher n-grams were used.", "Table 3: List of recent articles where Markovian character n-grams have been used as features. The columns indicate the length of the n-grams used. “***” indicates the best and “**” the second-best n-gram length as reported in the article in question. “*” indicates less effective n-gram lengths.", "Table 4: Recent papers (2016–) where the frequency of character n-grams has been used to generate feature vectors. The columns indicate the length of the n-grams used, and the machine learning method(s) used. The relevant section numbers for the methods are mentioned in parentheses. “...” indicates that even more methods were used.", "Table 5: List of articles (2016-) where character n-grams of differing sizes have been used as features. The numbered columns indicate the length of the n-grams used. The method column indicates the method used with the n-grams. The relevant section numbers are mentioned in parentheses. “...” indicates that even more methods were used.", "Table 6: References (2016-) where prefixes and suffixes collected from a training corpus have been used for LI. The columns indicate the length of the prefixes and suffixes. The method column indicates the method used. The relevant section numbers are mentioned in parentheses. “...” indicates that even more methods were used.", "Table 7: Word characteristics used by Mustonen (1965).", "Table 8: References (2015–) where word n-grams have been used as features. The numbered columns indicate the length of the n-grams used. “***” indicates the best and “**” the second best n-gram length, as evaluated in the article in question. “*” indicates less effective n-gram lengths. Identical numbers of asterisks indicate that there was no clear order of effectiveness, or that all the n-gram lengths were used simultaneously. The method column indicates the method used. The relevant section numbers are mentioned in parentheses. “...” indicates that even more methods were used.", "Table 9: List of articles where word and character n-grams have been used as features. The numbered columns indicate the length of the word n-grams and char-column the length of character n-grams used. The method column indicates the method used. The relevant section numbers are mentioned in parentheses. “...” indicates that even more methods were used.", "Table 10: References where SVMs have been tested with different kernels. The columns indicate the kernels used. “dn” stands for polynomial kernel.", "Table 11: Published LI Datasets", "Table 12: List of LI shared tasks.", "Table 13: DSLCC: the languages included in each version of the corpus collection, grouped by language similarity.", "Table 14: Empirical evaluations with more than 30 languages." ], "file": [ "2-Table1-1.png", "12-Table2-1.png", "13-Table3-1.png", "15-Table4-1.png", "18-Table5-1.png", "20-Table6-1.png", "21-Table7-1.png", "24-Table8-1.png", "26-Table9-1.png", "43-Table10-1.png", "52-Table11-1.png", "54-Table12-1.png", "55-Table13-1.png", "63-Table14-1.png" ] }
[ "what are the off-the-shelf systems discussed in the paper?" ]
[ [ "1804.08186-Off-the-Shelf Language Identifiers-10", "1804.08186-Off-the-Shelf Language Identifiers-7", "1804.08186-Off-the-Shelf Language Identifiers-4", "1804.08186-Off-the-Shelf Language Identifiers-3", "1804.08186-Off-the-Shelf Language Identifiers-8", "1804.08186-Off-the-Shelf Language Identifiers-6", "1804.08186-Off-the-Shelf Language Identifiers-1", "1804.08186-Off-the-Shelf Language Identifiers-2" ] ]
[ "Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier." ]
542
1909.05438
Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data. Our goal is to learn a neural semantic parser when only prior knowledge about a limited number of simple rules is available, without access to either annotated programs or execution results. Our approach is initialized by rules, and improved in a back-translation paradigm using generated question-program pairs from the semantic parser and the question generator. A phrase table with frequent mapping patterns is automatically derived, also updated as training progresses, to measure the quality of generated instances. We train the model with model-agnostic meta-learning to guarantee the accuracy and stability on examples covered by rules, and meanwhile acquire the versatility to generalize well on examples uncovered by rules. Results on three benchmark datasets with different domains and programs show that our approach incrementally improves the accuracy. On WikiSQL, our best model is comparable to the SOTA system learned from denotations.
{ "paragraphs": [ [ "Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performances BIBREF1 , however, their success is limited to the setting with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains.", "This work investigates neural semantic parsing in a low-resource setting, in which case we only have our prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-programs pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges including how to generate examples in an efficient way, how to measure the quality of generated examples which might contain errors and noise, and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples.", "We address the aforementioned challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question. We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its big success on unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, therefore noise and errors with low frequency can be filtered out. A similar idea has been worked as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained with examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training.", "We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The program for SQL queries for single-turn questions and subject-predicate pairs over knowledge graph is simple while the program for SQL queries for multi-turn questions have top-tier complexity among currently proposed tasks. Results show that our approach yields large improvements over rule-based systems, and incorporating different strategies incrementally improves the overall performance. 
On WikiSQL, our best performing system achieves execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%." ], [ "We focus on the task of executive semantic parsing. The goal is to map a natural language question/utterance INLINEFORM0 to a logical form/program INLINEFORM1 , which can be executed over a world INLINEFORM2 to obtain the correct answer INLINEFORM3 .", "We consider three tasks. The first task is single-turn table-based semantic parsing, in which INLINEFORM0 is a self-contained question, INLINEFORM1 is a SQL query in the form of “SELECT agg col INLINEFORM2 WHERE col INLINEFORM3 = val INLINEFORM4 AND ...”, and INLINEFORM5 is a web table consisting of multiple rows and columns. We use WikiSQL BIBREF8 as the testbed for this task. The second task is multi-turn table-based semantic parsing. Compared to the first task, INLINEFORM6 could be a follow-up question, the meaning of which depends on the conversation history. Accordingly, INLINEFORM7 in this task supports additional operations that copy previous turn INLINEFORM8 to the current turn. We use SequentialQA BIBREF9 for evaluation. In the third task, we change INLINEFORM9 to a large-scale knowledge-graph (i.e. Freebase) and consider knowledge-based question answering for single-turn questions. We use SimpleQuestions BIBREF10 as the testbed, where the INLINEFORM10 is in the form of a simple INLINEFORM11 -calculus like INLINEFORM12 , and the generation of INLINEFORM13 is equivalent to the prediction of the predicate and the subject entity.", "We study the problem in a low-resource setting. In the training process, we don't have annotated logical forms INLINEFORM0 or execution results INLINEFORM1 . Instead, we have a collection of natural language questions for the task, a limited number of simple mapping rules based on our prior knowledge about the task, and may also have a small number of domain-independent word-level matching tables if necessary. These rules are not perfect, with low coverage, and can even be incorrect in some situations. For instance, when predicting a SQL command in the first task, we have the prior knowledge that (1) WHERE values potentially have co-occurring words with table cells; (2) the words “more” and “greater” tend to be mapped to WHERE operator “ INLINEFORM2 ”; (3) within a WHERE clause, header and cell should be in the same column; and (4) the word “average” tends to be mapped to aggregator “avg”. Similarly, when predicting a INLINEFORM3 -calculus in the third task, the entity name might be present in the question, and among all the predicates connected to the entity, the predicate with the maximum number of co-occurring words might be correct. We would like to study what our model can achieve if we use rules as the starting point.", "We describe our approach for low-resource neural semantic parsing in this section.", "We propose to train a neural semantic parser using back-translation and meta-learning. The learning process is summarized in Algorithm FIGREF1 . We describe the three components in this section, namely back-translation, quality control, and meta-learning.", "Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints.
We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noise. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structured and follows a grammar, so its distribution is similar to that of the ground truth.", "Directly using generated datapoints as supervised training data is not desirable because those generated datapoints contain noise or errors. To address this, we follow the application of posterior regularization in neural machine translation BIBREF5 , and implement a dictionary-based discriminator which is used to measure the quality of a pseudo datapoint. The basic idea is that although these generated datapoints are not perfect, the frequent patterns of the mapping between a phrase in INLINEFORM0 and a token in INLINEFORM1 are helpful in filtering out low-frequency noise in the generated data BIBREF6 . There are multiple ways to collect the phrase table information, such as using statistical phrase-level alignment algorithms like Giza++ or directly counting the co-occurrence of any question word and logical form token. We use the latter in this work. Further details are described in the appendix.", "A simple way to update the semantic parser is to merge the datapoints in hand and train a one-size-fits-all model BIBREF2 . However, this will hurt the model's stability on examples covered by rules, and examples of the same task may vary widely BIBREF12 . Dealing with different types of examples requires the model to possess different abilities. For example, tackling examples uncovered by rules in WikiSQL requires the model to have the additional ability to map a column name to a totally different utterance, such as “country” to “nation”. Another simple solution is self-training BIBREF13 . One can train a model with examples covered by rules, and use the model as a teacher to make predictions on examples uncovered by rules and update the model on these predictions. However, self-training is somewhat tautological because the model is trained to make predictions which it can already produce.", "We learn the semantic parser with meta-learning, regarding learning from examples covered by rules or uncovered by rules as two (pseudo) tasks. Compared to the aforementioned strategies, the advantage of exploring meta-learning here is two-fold. First, we learn a specific model for each task, which provides guarantees about its stability on examples covered by rules. In the test phase, we can use the rule to detect which task an example belongs to, and use the corresponding task-specific model to make predictions. When dealing with examples covered by rules, we can either directly use rules to make predictions or use the updated model, depending on the accuracy of the learned model on the examples covered by rules on the development set. Second, latent patterns of examples may vary widely in terms of whether or not they are covered by rules.
Meta-learning is more desirable in this situation because it learns the model's ability to learn, improving the model's versatility, rather than mapping the latent patterns learned from datapoints in one distribution to datapoints in another distribution by force. Figure FIGREF1 is an illustration of data combination, self-training, and meta-learning.", "Meta-learning includes two optimizations: the learner that learns new tasks, and the meta-learner that trains the learner. In this work, the meta-learner is optimized by finding a good initialization that is highly adaptable. Specifically, we use model-agnostic meta-learning, MAML BIBREF7 , a powerful meta-learning algorithm with desirable properties including introducing no additional parameters and making no assumptions about the form of the model. In MAML, the task-specific parameter INLINEFORM0 is initialized by INLINEFORM1 , and updated using gradient descent based on the loss function INLINEFORM2 of task INLINEFORM3 . In this work, the loss functions of the two tasks are the same. The updated parameter INLINEFORM4 is then used to calculate the model's performance across tasks to update the parameter INLINEFORM5 . In this work, following the practical suggestions given by BIBREF17 , we update INLINEFORM6 in the inner loop and regard the outputs of the quality controller as the input of both tasks.", "If we only have examples covered by rules, such as those used in the initialization phase, meta-learning learns to learn a good initial parameter that is evaluated by its usefulness on the examples from the same distribution. In the training phase, datapoints from both tasks are generated, and meta-learning learns to learn an initialization parameter which can be quickly and efficiently adapted to examples from both tasks.", "We conduct experiments on three tasks to test our approach, including generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and predicting subject-predicate pairs over a knowledge graph BIBREF10 . We describe the task definition, base models, experiment settings and empirical results for each task, respectively.", "Given a natural language question INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which could be executed on table INLINEFORM5 to yield the correct answer of INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer.", "We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match table cells. After that, if a cell appears in more than one column, we choose the column whose name has more overlapping words with the question, with the constraint that the number of co-occurring words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that the surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurring words and cannot be the same as any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 .
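A minimal sketch of rules in this spirit is shown below. It is illustrative rather than a reproduction of the authors' implementation: the keyword lists stand in for the keyword table (Table TABREF8), which is not reproduced in this text, the comparison keywords are matched anywhere in the question rather than only around the value, and a matched cell value is simply linked to the first column in which it occurs.

```python
from typing import Dict, List, Tuple

# Illustrative keyword lists; the paper's actual keyword table is not
# reproduced here, so these entries are assumptions.
AGG_KEYWORDS = {"average": "AVG", "total": "SUM", "how many": "COUNT",
                "number of": "COUNT", "most": "MAX", "least": "MIN"}
GT_KEYWORDS = {"more", "greater", "above", "over", "after"}
LT_KEYWORDS = {"less", "fewer", "below", "under", "before"}


def word_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))


def rule_parse(question: str, headers: List[str],
               rows: List[List[str]]) -> Dict:
    """Rule-based construction of a WikiSQL-style query (simplified sketch)."""
    q = question.lower()
    q_words = set(q.split())
    where: List[Tuple[str, str, str]] = []
    where_cols, seen_values = set(), set()

    # 1) WHERE values: table cells that appear verbatim in the question.
    for row in rows:
        for col, cell in zip(headers, row):
            cell_str = str(cell).lower()
            if cell_str and cell_str in q and cell_str not in seen_values:
                op = "="
                if GT_KEYWORDS & q_words:
                    op = ">"
                elif LT_KEYWORDS & q_words:
                    op = "<"
                where.append((col, op, cell))
                where_cols.add(col)
                seen_values.add(cell_str)

    # 2) SELECT column: largest word overlap with the question,
    #    excluding columns already used in the WHERE clause.
    candidates = [h for h in headers if h not in where_cols] or headers
    select_col = max(candidates, key=lambda h: word_overlap(h, question))

    # 3) Aggregator: NONE unless an aggregation keyword occurs in the question.
    agg = "NONE"
    for kw, a in AGG_KEYWORDS.items():
        if kw in q:
            agg = a
            break

    return {"select": select_col, "agg": agg, "where": where}
```

Applying rules of this kind to the raw questions is what produces the rule-covered question-program pairs that initialize the semantic parser and the question generator.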
The coverage of our rules on the training set is 78.4%, with an execution accuracy of 77.9%.", "We implement a neural network modular approach as the base model, which includes different modules to predict different SQL constituents. This approach is based on the understanding of the SQL grammar in WikiSQL, namely “SELECT $agg $column WHERE $column $op $value (AND $column $op $value)*”, where tokens starting with “$” are the slots to be predicted BIBREF18 . In practice, modular approaches typically achieve higher accuracy than end-to-end learning approaches. Specifically, at the first step we implement a sequential labeling module to detect WHERE values and link them to table cells. Advantages of starting from WHERE values include that WHERE values are less ambiguous compared to other slots, and that the number of WHERE clauses can be naturally detected. After that, for each WHERE value, we use the preceding and following contexts in the question to predict its WHERE column and the WHERE operator through two unidirectional LSTMs. Column attention BIBREF18 is used for predicting a particular column. Similar LSTM-based classifiers are used to predict the SELECT column and the SELECT aggregator.", "According to whether the training data can be processed by our rules, we divide it into two parts: a rule-covered part and a rule-uncovered part. For the rule-covered part, we obtain rule-covered training data using our rules. For the rule-uncovered part, we can also obtain training data using the trained base model; we refer to these data as self-inference training data. Furthermore, we can obtain more training data by back-translation; we refer to these data as question-generation training data. For all the settings, the base model is initialized with rule-covered training data. In Base + Self Training Method, we finetune the base model with self-inference training data. In Base + Question Generation Method, we use question-generation training data to finetune our model. In Base + BT Method, we use both self-inference and question-generation data to finetune our model. In Base + BT + QC, we add our quality controller. In Base + BT + QC + MAML, we further add meta-learning.", "Results are given in Table TABREF5 . We can see that back-translation, quality control and MAML incrementally improve the accuracy. Question generation is better than self-training here because the logical form in WikiSQL is relatively simple, so the distribution of the sampled logical forms is similar to the original one. In the back-translation setting, generated examples come from both self-training and the question generation model. The model performs better than rules on rule-covered examples, and improves the accuracy on uncovered examples. Figure FIGREF12 shows the learning curves of the COLUMN prediction model with or without using MAML. The model using MAML has a better starting point during training, which reflects the effectiveness of the pre-trained parameter.", "We test our approach on question answering over another type of environment: a knowledge graph consisting of subject-relation-object triples.", "Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidence from the knowledge graph. We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in such a way that the subject and relation are mentioned in the question, and the object is the answer.
The task requires predicting the entityId and the relation involved in the question.", "Our rule for KBQA is simple and does not use a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with the maximum number of co-occurring words. The coverage of our rule on the training set is 16.0%, with an accuracy of 97.3% for relation prediction.", "We follow BIBREF22 , and implement a KBQA pipeline consisting of three modules in this work. At the first step, we use a sequence labeling model, i.e. LSTM-CRF, to detect entity mention words in the question. After that, we use an entity linking model with BM25 built on Elasticsearch. The top-K ranked similar entities are retrieved as the candidate list. Then, we get all the relations connected to entities in the candidate list as candidate relations, and use a relation prediction model, which is based on Match-LSTM BIBREF23 , to predict the relation. Finally, from all the entities connected to the predicted relation, we choose the one with the highest BM25 score as the predicted entity. We use FB2M as the KB, which includes about 2 million triples.", "The settings are the same as those described in table-based semantic parsing.", "Results are given in Table TABREF10 , which are consistent with the numbers in WikiSQL. Using back-translation, quality control and MAML incrementally improves the accuracy, and our approach generalizes well to rule-uncovered examples.", "We consider the task of conversational table-based semantic parsing in this part. Compared to single-turn table-based semantic parsing as described in subsection SECREF6 , the meaning of a natural language question may also depend on questions from past turns, reflecting the common ellipsis and co-reference phenomena in conversational agents.", "Given a natural language question at the current turn, a web table, and previous turn questions in a conversation as the input, the task aims to generate a program (i.e. logical form), which can be executed on the table to obtain the correct answer of the current turn question.", "We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 . It contains 6,066 question sequences covering 17,553 question-answer pairs. Each sequence includes 2.9 natural language questions on average. Different from WikiSQL which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on the training set is 75.5%, with an accuracy of 38.5%.", "We implement a modular approach on top of a grammar of derivation rules (actions) as the base model.
Similar to BIBREF9 , our grammar consists of predefined actions used for predicting SELECT column, WHERE column, WHERE operator, WHERE value, and determining whether it is required to copy the entire action sequence of the previous turn questions. After encoding a question and previous turn questions into vectors, we first use a controller module to predict an action sequence consisting of slots, and then use specific modules to predict the argument of each slot. Similar to BIBREF9 , we use a recurrent structure as the backbone of each module and use the softmax layer for making prediction.", "The settings are the same as those described in table-based semantic parsing.", "From Table TABREF20 , we can see that question generation does not work well on this task. This is because the difficulty in generating sequential questions and complex target logical forms. Applying MAML to examples not coming from question generation performs best. We leave contextual question generation as a future work." ], [ "We present an approach to learn neural semantic parser from simple domain-independent rules, instead of annotated logical forms or denotations. Our approach starts from examples covered by rules, which are used to initialize a semantic parser and a question generator in a back-translation paradigm. Generated examples are measured and filtered based on statistic analysis, and then used with model-agnostic meta-learning, which guarantees model's accuracy and stability on rule-covered examples, and acquires the versatility to generalize well on rule-uncovered examples. We conduct experiments on three datasets for table-based and knowledge-based question answering tasks. Results show that incorporating different strategies incrementally improves the performance. Our best model on WikiSQL achieves comparable accuracy to the system learned from denotation. In the future, we plan to focus on more complex logical forms. " ] ], "section_name": [ "Introduction", "Problem Statement", "Learning Algorithm", "Back-Translation", "Quality Controller", "Meta-Learning", "Experiment", "Table-Based Semantic Parsing", "Knowledge-Based Question Answering", "Conversational Table-Based Semantic Parsing", "Conclusion and Future Directions" ] }
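As a concrete illustration of the KBQA rule described above (strict surface-string matching for the entity, then picking the connected relation that shares the most words with the question), here is a minimal sketch. The data structures `surface_to_entities` and `entity_relations`, and the whitespace tokenizer, are assumptions made for the example and are not part of the released system.

```python
from typing import Dict, List, Optional, Set, Tuple

def tokenize(text: str) -> List[str]:
    # Placeholder whitespace tokenizer; the actual system's tokenization is not specified.
    return text.lower().split()

def kbqa_rule(question: str,
              surface_to_entities: Dict[str, List[str]],
              entity_relations: Dict[str, List[str]]) -> Optional[Tuple[str, str]]:
    """Return (entity_id, relation) if the rule fires, else None.

    surface_to_entities: maps a surface string to the KB entities sharing that name.
    entity_relations:    maps an entity id to the relations connected to it.
    """
    q_tokens = set(tokenize(question))
    # 1) Strict string matching: keep mentions whose surface form appears verbatim
    #    in the question and is unambiguous in the KB (exactly one entity).
    matched = []
    for surface, entities in surface_to_entities.items():
        if surface.lower() in question.lower() and len(entities) == 1:
            matched.append((surface, entities[0]))
    # The rule only fires when the question contains exactly one such entity.
    if len(matched) != 1:
        return None
    _, entity_id = matched[0]
    # 2) Among the relations connected to that entity, pick the one whose name
    #    shares the maximum number of words with the question.
    def overlap(relation: str) -> int:
        rel_words: Set[str] = set(relation.replace("/", " ").replace("_", " ").lower().split())
        return len(rel_words & q_tokens)
    relations = entity_relations.get(entity_id, [])
    if not relations:
        return None
    return entity_id, max(relations, key=overlap)
```

On SimpleQuestions, a rule of this kind is reported to cover 16.0% of the training set with 97.3% relation accuracy, which is what makes it usable as a seed for the neural pipeline.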
{ "answers": [ { "annotation_id": [ "691175658de1ec8838a63a134a2b95d7b926bf32" ], "answer": [ { "evidence": [ "Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noises. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structural which follows a grammar, whose distribution is similar to the ground truth." ], "extractive_spans": [ " applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5", "both models are improved following the back-translation protocol that target sequences should follow the real data distribution" ], "free_form_answer": "", "highlighted_evidence": [ "We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noises." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "5a377590899bad0ee77fa6a123e2a48f0727b2ae" ], "answer": [ { "evidence": [ "We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be same with any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 . The coverage of our rule on training set is 78.4%, with execution accuracy of 77.9%.", "Our rule for KBQA is simple without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with maximum number of co-occurred words. The coverage of our rule on training set is 16.0%, with an accuracy of 97.3% for relation prediction.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL. 
Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on training set is 75.5%, with an accuracy of 38.5%." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We describe our rules for WikiSQL here.", "Our rule for KBQA is simple without using a curated mapping dictionary.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "571e220f13b7ac8abc093ee95db41c0cae004aed" ], "answer": [ { "evidence": [ "We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be same with any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 . The coverage of our rule on training set is 78.4%, with execution accuracy of 77.9%.", "Our rule for KBQA is simple without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with maximum number of co-occurred words. The coverage of our rule on training set is 16.0%, with an accuracy of 97.3% for relation prediction.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on training set is 75.5%, with an accuracy of 38.5%." ], "extractive_spans": [], "free_form_answer": "WikiSQL - 2 rules (SELECT, WHERE)\nSimpleQuestions - 1 rule\nSequentialQA - 3 rules (SELECT, WHERE, COPY)", "highlighted_evidence": [ "We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be same with any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 . 
The coverage of our rule on training set is 78.4%, with execution accuracy of 77.9%.", "Our rule for KBQA is simple without using a curated mapping dictionary.", "The pipeline of rules in SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "4a00a58f5c51855f1fbfbcbe5b00a4fac87ca88c" ], "answer": [ { "evidence": [ "Given a natural language INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which could be executed on table INLINEFORM5 to yield the correct answer of INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer.", "Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidences from the knowledge graph. We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in a way that subject and relation are mentioned in the question, and that object is the answer. The task requires predicting the entityId and the relation involved in the question.", "We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 . It contains 6,066 question sequences covering 17,553 question-answer pairs. Each sequence includes 2.9 natural language questions on average. Different from WikiSQL which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not." ], "extractive_spans": [ "WikiSQL", "SimpleQuestions", "SequentialQA" ], "free_form_answer": "", "highlighted_evidence": [ "We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables.", "We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple.", "We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How is the back-translation model trained?", "Are the rules dataset specific?", "How many rules had to be defined?", "What datasets are used in this paper?" 
], "question_id": [ "d7aed39c359fd381495b12996c4dfc1d3da38ed5", "9c423e3b44e3acc2d4b0606688d4ac9d6285ed0f", "b6fb72437e3779b0e523b9710e36b966c23a2a40", "e6469135e0273481cf11a6c737923630bc7ccfca" ], "question_writer": [ "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668" ], "search_query": [ "semantic parsing", "semantic parsing", "semantic parsing", "semantic parsing" ], "topic_background": [ "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: An illustration of the difference between (a) data combination which learns a monolithic, one-size-fits-all model, (b) self-training which learns from predictions which the model produce and (c) meta-learning that reuse the acquired ability to learn.", "Table 1: Results on WikiSQL testset. BT stands for back-translation. QC stands for quality control.", "Table 2: Token-level dictionary for aggregators (upper group) and operators (lower group) in WikiSQL.", "Table 3: Results on SimpleQuestions testset. BT stands for back-translation. QC stands for quality control.", "Figure 2: Learning curve of the WHERE column prediction model on WikiSQL devset.", "Table 4: Results on SequentialQA testset. BT stands for back-translation. QC stands for quality control.", "Table 5: Token-level dictionary used for additional actions in SequentialQA." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Figure2-1.png", "6-Table4-1.png", "6-Table5-1.png" ] }
[ "How many rules had to be defined?" ]
[ [ "1909.05438-Table-Based Semantic Parsing-1", "1909.05438-Knowledge-Based Question Answering-2", "1909.05438-Conversational Table-Based Semantic Parsing-3" ] ]
[ "WikiSQL - 2 rules (SELECT, WHERE)\nSimpleQuestions - 1 rule\nSequentialQA - 3 rules (SELECT, WHERE, COPY)" ]
546
2003.08370
Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yorùbá
The lack of labeled training data has limited the development of natural language processing tools, such as named entity recognition, for many languages spoken in developing countries. Techniques such as distant and weak supervision can be used to create labeled data in a (semi-) automatic way. Additionally, to alleviate some of the negative effects of the errors in automatic annotation, noise-handling methods can be integrated. Pretrained word embeddings are another key component of most neural named entity classifiers. With the advent of more complex contextual word embeddings, an interesting trade-off between model size and performance arises. While these techniques have been shown to work well in high-resource settings, we want to study how they perform in low-resource scenarios. In this work, we perform named entity recognition for Hausa and Yorùbá, two languages that are widely spoken in several developing countries. We evaluate different embedding approaches and show that distant supervision can be successfully leveraged in a realistic low-resource scenario where it can more than double a classifier's performance.
{ "paragraphs": [ [ "Named Entity Recognition (NER) is a classification task that identifies words in a text that refer to entities (such as dates, person, organization and location names). It is a core task of natural language processing and a component for many downstream applications like search engines, knowledge graphs and personal assistants. For high-resource languages like English, this is a well-studied problem with complex state-of-the-art systems reaching close to or above 90% F1-score on the standard datasets CoNLL03 BIBREF0 and Ontonotes BIBREF1. In recent years, research has been extended to a larger pool of languages including those of developing countries BIBREF2, BIBREF3, BIBREF4, BIBREF5. Often, for these languages (like Hausa and Yorùbá studied here), there exists a large population with access to digital devices and internet (and therefore digital text), but natural language processing (NLP) tools do not support them.", "One key reason is the absence of labeled training data required to train these systems. While manually labeled, gold-standard data is often only available in small quantities, it tends to be much easier to obtain large amounts of unlabeled text. Distant and weak supervision methods can then be used to create labeled data in a (semi-) automatic way. Using context BIBREF6, BIBREF7, external knowledge and resources BIBREF8, BIBREF9, expert rules BIBREF10, BIBREF11 or self-training BIBREF12, BIBREF13, a corpus or dataset can be labeled quickly and cheaply. Additionally, a variety of noise-handling methods have been proposed to circumvent the negative effects that errors in this automatic annotation might have on the performance of a machine learning classifier.", "In this work, we study two methods of distant supervision for NER: Automatic annotation rules and matching of lists of entities from an external knowledge source. While distant supervision has been successfully used for high resource languages, it is not straight forward that these also work in low-resource settings where the amount of available external information might be much lower. The knowledge graph of Wikidata e.g. contains 4 million person names in English while only 32 thousand such names are available in Yorùbá, many of which are Western names.", "Orthogonally to distant supervision, the pre-training of word embeddings is a key component for training many neural NLP models. A vector representation for words is built in an unsupervised fashion, i.e. on unlabeled text. Standard embedding techniques include Word2Vec BIBREF14, GloVe BIBREF15 and FastText BIBREF16. In the last two years, contextualized word embeddings have been proposed BIBREF17, BIBREF18, BIBREF19. At the cost of having a much larger model size, these vector representations take the context of words into account and have been shown to outperform other embeddings in many tasks. In this study, we evaluate both types of representations.", "The key questions we are interested in this paper are: How do NER models perform for Hausa and Yorùbá, two languages from developing countries? Are distant-supervision techniques relying on external information also useful in low-resource settings? How do simple and contextual word embeddings trade-off in model size and performance?" ], [ "Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. 
The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects but the one regarded as standard Hausa is the Kananci spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language but the tones are often ignored in writings, the language is written in a modified Latin alphabet. Despite the popularity of Hausa as an important regional language in Africa and it's popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering.", "Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Yorùbá has several dialects but the written language has been standardized by the 1974 Joint Consultative Committee on Education BIBREF21, it has 25 letters without the Latin characters (c, q, v, x and z) and with additional characters (ẹ, gb, ṣ , ọ). Yorùbá is a tone language and the tones are represented as diacritics in written text, there are three tones in Yorùbá namely low ( \\), mid (“$-$”) and high ($/$). The mid tone is usually ignored in writings. Often time articles written online including news articles like BBC and VON ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except they are in a context. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) will be mapped to owo without diacritics. Similar to the Hausa language, there are few or no labelled datasets for NLP tasks." ], [ "The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it.", "The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio. The named entities in the dataset are personal names (PER), organization (ORG), location (LOC) and date & time (DATE). All other tokens are assigned a tag of \"O\".", "For the Yorùbá NER training, we make use of Yorùbá FastText embeddings BIBREF22 and multilingual-BERT that was trained on 104 languages including Yorùbá. Instead of the original FastText embeddings BIBREF16, we chose FastText embeddings trained on a multi-domain and high-quality dataset BIBREF22 because it gave better word similarity scores." 
], [ "In this work, we rely on two sources of distant supervision chosen for its ease of application:", "Rules allow to apply the knowledge of domain experts without the manual effort of labeling each instance. They are especially suited for entities that follow specific patterns, like time phrases in text (see also BIBREF23). We use them for the DATE entity. In Yoruba, date expressions are written with the keywords of “ọj” (day), “oṣù” (month), and “ọdn” (year). Similarly, time expressions are written with keywords such as “wákàtí” (hour), “ìṣjú (minute) and “ìṣjú-aaya (seconds). Relative date and time expressions are also written with keywords “ḷodn” (in the year), “loṣù” (in the month), “lọs” (in the week), “lọj” (in the day). An example of a date expression is:", "“8th of December, 2018” in Yorùbá translates to “ọj 8 oṣù Ọp, ọdún 2018”", "Lists of Entities can be obtained from a variety of sources like gazetteers, dictionaries, phone books, census data and Wikipedia categories BIBREF24. In recent years, knowledge bases like Freebase and Wikidata have become another option to retrieve entity lists in a structured way. An entity list is created by extracting all names of that type from a knowledge source (e.g. all person names from Wikidata). If a word or token from the unlabeled text matches an entry in an entity list, it is assigned the corresponding label. Experts can add heuristics to this automatic labeling that improve the matching BIBREF25. These include e.g. normalizing the grammatical form of words or filtering common false positives.", "Another popular method for low-resource NER is the use of cross-lingual information BIBREF26. Alternatives to distant supervision are crowd-sourcing BIBREF27 and non-expert annotations BIBREF28." ], [ "The labels obtained through distant and weak supervision methods tend to contain a high amount of errors. In the Food101N dataset BIBREF29 around 20% of the automatically obtained labels are incorrect while for Clothing1M BIBREF30 the noise rate is more than 60%. Learning with this additional, noisily labeled data can result in lower classification performance compared to just training on a small set of clean labels (cf. e.g. BIBREF31). A variety of techniques have been proposed to handle label noise like modelling the underlying noise process BIBREF32 and filtering noisy instances BIBREF33, BIBREF34. BIBREF35 gives an in-depth introduction into this field and BIBREF36 survey more recent approaches, focusing on the vision domain.", "In this work, we experiment with three noise handling techniques. The approach by BIBREF37 estimates a noise channel using the EM algorithm. It treats all labels as possibly noisy and does not distinguish between a clean and a noisy part of the data. In contrast, the method by BIBREF38 leverages the existence of a small set of gold standard labels, something that - in our experience - is often available even in low resource settings. Having such a small set of clean labels is beneficial both for the main model itself as well as for the noise handling technique. Both approaches model the relationship between clean and noisy labels using a confusion matrix. This allows adapting the noisy to the clean label distribution during training. For a setting with 5 labels, it only requires $5^2=25$ additional parameters to estimate which could be beneficial when only few training data is available. 
The technique by BIBREF39 (adapted to NER by BIBREF38) learns a more complex neural network to clean the noisy labels before training with them. It also takes the features into account when cleaning the noise and it might, therefore, be able to model more complex noise processes. All three techniques can be easily added to the existing standard neural network architectures for NER." ], [ "Hausa Distant supervision on Hausa was performed using lists of person names extracted from Wikipedia data. Since we had limited access to the data, we tested a simplified binary NER-tagging setting (PERSON-tags only). As a base model, we used a Bi-LSTM model developed for Part-of-Speech tagging BIBREF40. For noise handling, we apply the Noise Channel model by BIBREF37.", "Yorùbá For Yorùbá, the entity lists were created by extracting person, location and organization entities from Wikidata in English and Yorùbá. Additionally, a list of person names in Nigeria was obtained from a Yorùbá Name website (8,365 names) and list of popular Hausa, Igbo, Fulani and Yorùbá people on Wikipedia (in total 9,241 names). As manual heuristic, a minimum name length of 2 was set for extraction of PER (except for Nigerian names), LOC and ORG. The Nigerian names were set to include names with a minimum length of 3. For the DATE label, a native Yorùbá speaker wrote some annotation rules using 11 “date keywords” (“ọj”, “ọs”, “os”, “ọdn”, “wákàtí” , “ḷodn”, “ḷodn-un”, “ọdn-un” “lọs” , “lọj”, “ aago”) following these two criteria: (1) A token is a date keyword or follows a date keyword in a sequence. (2) A token is a digit. For Yorùbá, we evaluate four settings with different amounts of clean data, namely 1k, 2k, 4k and the full dataset. As distantly supervised data with noisy labels, the full dataset is used. Additionally, 19,559 words from 18 articles of the Global News Corpus (different from the articles in the training corpus) were automatically annotated.", "The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all the Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error.", "The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as an average of 5 runs, we report the F1-score and the standard error in the result section." ], [ "The results for Hausa are given in Table TABREF14. Training with a mix of 50% clean and 50% distantly-supervised data performs 15 F1-score points below using the whole 100% clean data which is to be expected due to the lower quality of the distantly-supervised labels. Using the Noise Channel closes half of this gap. Due to the limited availability of the dataset, we could unfortunately not investigate this further, but it shows already the benefits that are possible through noise-handling.", "An evaluation of the distant supervision for Yorùbá is given in Table TABREF14. 
The quality of the automatically annotated labels differs between the classes. Locations perform better than person and organization names, probably due to locations being less diverse and better covered in Wikidata. With simple date rules, we obtain already a 48% F1-score. This shows the importance of leveraging the knowledge of native speakers in automatic annotations. Overall a decent annotation can be obtained by the distant supervision and it even outperforms some of the actual machine learning models in the low-resource setting. Table TABREF14 compares using only Wikidata as data source versus adding additional, manually obtained lists of person names. While adding a list of Yorùbá names only improves recall slightly, the integration of Nigerian names helps to boost recall by 13 points.", "The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements. The noise-cleaning approach struggles somewhat while the confusion matrix architecture does give better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled token.", "The Bi-LSTM model has 1.50 million parameters (1.53 million for the cleaning model), while BERT has 110 million parameters. There is a clear trade-off between model size and performance. The BERT model is 70 times larger and obtains consistently better results due to its more complex, contextual embeddings pretrained on more data. Still, the F1-score also drops nearly half for the BERT model in the 1k setting compared to the full dataset. For 1k and 2k labeled data, the distant supervision helps to improve the model's performance. However, once the model trained only on clean data reaches a higher F1-score than the distant supervision technique, the model trained on clean and distantly-supervised data deteriorates. This suggests that the BERT model overfits too much on the noise in the distant supervision." ], [ "In this study, we analysed distant supervision techniques and label-noise handling for NER in Hausa and Yorùbá, two languages from developing countries. We showed that they can be successfully leveraged in a realistic low-resource scenario to double a classifier's performance. If model size is not a constraint, the more complex BERT model clearly outperforms the smaller Bi-LSTM architecture. Nevertheless, there is still a large gap between the best performing model on Yorùbá with 66 F1-score and the state-of-the-art in English around 90.", "We see several interesting follow-ups to these evaluations. In the future, we want to evaluate if noise handling methods can also allow the more complex BERT model to benefit from distant supervision. Regarding the model complexity, it would be interesting to experiment with more compact models like DistilBERT BIBREF42 that reach a similar performance with a smaller model size for high-resource settings. In general, we want to investigate more in-depth the trade-offs between model complexity and trainability in low-resource scenarios." 
], [ "The experiments on Hausa were possible thanks to the collaboration with Florian Metze and CMU as part of the LORELEI project. Gefördert durch die Deutsche Forschungsgemeinschaft (DFG) – Projektnummer 232722074 – SFB 1102 / Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 232722074 – SFB 1102, the EU-funded Horizon 2020 projects ROXANNE under grant agreement No. 833635 and COMPRISE (http://www.compriseh2020.eu/) under grant agreement No. 3081705." ] ], "section_name": [ "Introduction", "Background & Methods ::: Languages", "Background & Methods ::: Datasets & Embeddings", "Background & Methods ::: Distant and Weak Supervision", "Background & Methods ::: Learning With Noisy Labels", "Models & Experimental Settings", "Results", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "4a0e4c2d0a6c8bd5658f150baf05c1b5bd17ae7a" ], "answer": [ { "evidence": [ "The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it.", "The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio. The named entities in the dataset are personal names (PER), organization (ORG), location (LOC) and date & time (DATE). All other tokens are assigned a tag of \"O\"." ], "extractive_spans": [ "10k training and 1k test", "1,101 sentences (26k tokens)" ], "free_form_answer": "", "highlighted_evidence": [ "The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances.", "The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "936ebebd7607327f248102de80e8ff8cda73f44b" ], "answer": [ { "evidence": [ "The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements. The noise-cleaning approach struggles somewhat while the confusion matrix architecture does give better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled token.", "FLOAT SELECTED: Figure 1: F1-scores and standard error for Yorùbá." ], "extractive_spans": [], "free_form_answer": "Bi-LSTM: For low resource <17k clean data: Using distant supervision resulted in huge boost of F1 score (1k eg. ~9 to ~36 wit distant supervision)\nBERT: <5k clean data boost of F1 (1k eg. ~32 to ~47 with distant supervision)", "highlighted_evidence": [ "The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements. 
The noise-cleaning approach struggles somewhat while the confusion matrix architecture does give better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled token.", "FLOAT SELECTED: Figure 1: F1-scores and standard error for Yorùbá." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "ef8e3db2393509b1826c8860b3ae186d367c74b3" ], "answer": [ { "evidence": [ "The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all the Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error.", "The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as an average of 5 runs, we report the F1-score and the standard error in the result section." ], "extractive_spans": [ "Bi-LSTM", "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions.", "The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "d60d6aee12c9116989f1f6b01103cacc0c9a7538" ], "answer": [ { "evidence": [ "Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects but the one regarded as standard Hausa is the Kananci spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language but the tones are often ignored in writings, the language is written in a modified Latin alphabet. 
Despite the popularity of Hausa as an important regional language in Africa and it's popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering.", "Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Yorùbá has several dialects but the written language has been standardized by the 1974 Joint Consultative Committee on Education BIBREF21, it has 25 letters without the Latin characters (c, q, v, x and z) and with additional characters (ẹ, gb, ṣ , ọ). Yorùbá is a tone language and the tones are represented as diacritics in written text, there are three tones in Yorùbá namely low ( \\), mid (“$-$”) and high ($/$). The mid tone is usually ignored in writings. Often time articles written online including news articles like BBC and VON ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except they are in a context. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) will be mapped to owo without diacritics. Similar to the Hausa language, there are few or no labelled datasets for NLP tasks." ], "extractive_spans": [ "Nigeria", "Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan", "Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil" ], "free_form_answer": "", "highlighted_evidence": [ "Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan.", "Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How much labeled data is available for these two languages?", "What was performance of classifiers before/after using distant supervision?", "What classifiers were used in experiments?", "In which countries are Hausa and Yor\\`ub\\'a spoken?" ], "question_id": [ "06202ab8b28dcf3991523cf163b8844b42b9fc99", "271019168ed3a2b0ef5e3780b48a1ebefc562b57", "288613077787159e512e46b79190c91cd4e5b04d", "cf74ff49dfcdda2cd67a896b4b982a1c3ee51531" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
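The Bi-LSTM tagger quoted in the evidence above (a Bi-LSTM with a 300-dimensional hidden state per direction, a linear feature layer, and a linear classification layer) can be sketched as follows. The embedding dimension, feature-layer width, and label count are assumptions made for the example; with these choices the model lands in the same ballpark as the reported 1.5 million parameters.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Approximate skeleton of the Bi-LSTM NER model described in the paper."""
    def __init__(self, embedding_dim: int = 300, hidden: int = 300,
                 num_labels: int = 9, feature_dim: int = 300):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden, bidirectional=True, batch_first=True)
        self.features = nn.Linear(2 * hidden, feature_dim)
        self.classifier = nn.Linear(feature_dim, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embedding_dim) pre-trained word vectors
        states, _ = self.lstm(embeddings)           # (batch, seq_len, 2 * hidden)
        feats = torch.tanh(self.features(states))   # (batch, seq_len, feature_dim)
        return self.classifier(feats)               # per-token label logits

model = BiLSTMTagger()
print(model(torch.randn(2, 20, 300)).shape)  # torch.Size([2, 20, 9])
```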
{ "caption": [ "Figure 1: F1-scores and standard error for Yorùbá." ], "file": [ "5-Figure1-1.png" ] }
[ "What was performance of classifiers before/after using distant supervision?" ]
[ [ "2003.08370-5-Figure1-1.png", "2003.08370-Results-2" ] ]
[ "Bi-LSTM: For low resource <17k clean data: Using distant supervision resulted in huge boost of F1 score (1k eg. ~9 to ~36 wit distant supervision)\nBERT: <5k clean data boost of F1 (1k eg. ~32 to ~47 with distant supervision)" ]
547
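For completeness, the entity-list (gazetteer) matching used as the other source of distant supervision in the Hausa/Yorùbá study above is equally simple to sketch. The names below are made-up examples; whether the minimum-length heuristic counts characters or tokens is not fully specified in the text, so characters are assumed.

```python
from typing import Dict, List, Set

def build_entity_lists(raw_names: Dict[str, Set[str]],
                       min_length: int = 2) -> Dict[str, Set[str]]:
    """Filter raw name lists (e.g. extracted from Wikidata) with a minimum
    length heuristic, as described in the paper."""
    return {label: {n for n in names if len(n) >= min_length}
            for label, names in raw_names.items()}

def distant_ner_labels(tokens: List[str],
                       entity_lists: Dict[str, Set[str]]) -> List[str]:
    """Assign a label to every token that exactly matches an entry of an
    entity list; everything else stays 'O'."""
    labels = ["O"] * len(tokens)
    for i, tok in enumerate(tokens):
        for label, names in entity_lists.items():
            if tok in names:
                labels[i] = label
                break
    return labels

lists = build_entity_lists({"PER": {"Adewale", "Bola"}, "LOC": {"Lagos", "Kano"}})
print(distant_ner_labels("Bola travelled to Kano".split(), lists))
# ['PER', 'O', 'O', 'LOC']
```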
1909.00361
Cross-Lingual Machine Reading Comprehension
Though the community has made great progress on the Machine Reading Comprehension (MRC) task, most previous work addresses English-based MRC problems, and there have been few efforts on other languages, mainly due to the lack of large-scale training data. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task for languages other than English. First, we present several back-translation approaches for the CLMRC task, which are straightforward to adopt. However, accurately aligning the answer into another language is difficult and can introduce additional noise. In this context, we propose a novel model called Dual BERT, which takes advantage of the large-scale training data provided by a rich-resource language (such as English), learns the semantic relations between the passage and question in a bilingual context, and then utilizes the learned knowledge to improve reading comprehension performance in the low-resource language. We conduct experiments on two Chinese machine reading comprehension datasets, CMRC 2018 and DRCD. The results show consistent and significant improvements over various state-of-the-art systems by a large margin, which demonstrates the potential of the CLMRC task. Resources available: this https URL
{ "paragraphs": [ [ "Machine Reading Comprehension (MRC) has been a popular task to test the reading ability of the machine, which requires to read text material and answer the questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and massive progresses have been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions had been made, most works are dealing with English reading comprehension. Reading comprehension in other than English has not been well-addressed mainly due to the lack of large-scale training data.", "To enrich the training data, there are two traditional approaches. Firstly, we can annotate data by human experts, which is ideal and high-quality, while it is time-consuming and rather expensive. One can also obtain large-scale automatically generated data BIBREF0, BIBREF1, BIBREF6, but the quality is far beyond the usable threshold. Another way is to exploit cross-lingual approaches to utilize the data in rich-resource language to implicitly learn the relations between $<$passage, question, answer$>$.", "In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task that aims to help reading comprehension in low-resource languages. First, we present several back-translation approaches when there is no or partially available resources in the target language. Then we propose a novel model called Dual BERT to further improve the system performance when there is training data available in the target language. We first translate target language training data into English to form pseudo bilingual parallel data. Then we use multilingual BERT BIBREF7 to simultaneously model the $<$passage, question, answer$>$ in both languages, and fuse the representations of both to generate final predictions. Experimental results on two Chinese reading comprehension dataset CMRC 2018 BIBREF8 and DRCD BIBREF9 show that by utilizing English resources could substantially improve system performance and the proposed Dual BERT achieves state-of-the-art performances on both datasets, and even surpass human performance on some metrics. Also, we conduct experiments on the Japanese and French SQuAD BIBREF10 and achieves substantial improvements. Moreover, detailed ablations and analysis are carried out to demonstrate the effectiveness of exploiting knowledge from rich-resource language. To best of our knowledge, this is the first time that the cross-lingual approaches applied and evaluated on realistic reading comprehension data. The main contributions of our paper can be concluded as follows.", "[leftmargin=*]", "We present several back-translation based reading comprehension approaches and yield state-of-the-art performances on several reading comprehension datasets, including Chinese, Japanese, and French.", "We propose a model called Dual BERT to simultaneously model the $<$passage, question$>$ in both source and target language to enrich the text representations.", "Experimental results on two public Chinese reading comprehension datasets show that the proposed cross-lingual approaches yield significant improvements over various baseline systems and set new state-of-the-art performances." ], [ "Machine Reading Comprehension (MRC) has been a trending research topic in recent years. 
Among various types of MRC tasks, span-extraction reading comprehension has been enormously popular (such as SQuAD BIBREF4), and we have seen a great progress on related neural network approaches BIBREF11, BIBREF12, BIBREF13, BIBREF3, BIBREF14, especially those were built on pre-trained language models, such as BERT BIBREF7. While massive achievements have been made by the community, reading comprehension in other than English has not been well-studied mainly due to the lack of large-scale training data.", "BIBREF10 proposed to use runtime machine translation for multilingual extractive reading comprehension. They first translate the data from the target language to English and then obtain an answer using an English reading comprehension model. Finally, they recover the corresponding answer in the original language using soft-alignment attention scores from the NMT model. However, though an interesting attempt has been made, the zero-shot results are quite low, and alignments between different languages, especially for those have different word orders, are significantly different. Also, they only evaluate on a rather small dataset (hundreds of samples) that was translated from SQuAD BIBREF4, which is not that realistic.", "To solve the issues above and better exploit large-scale rich-resourced reading comprehension data, in this paper, we propose several zero-shot approaches which yield state-of-the-art performances on Japanese and French SQuAD data. Moreover, we also propose a supervised approach for the condition that there are training samples available for the target language. To evaluate the effectiveness of our approach, we carried out experiments on two realistic public Chinese reading comprehension data: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. Experimental results demonstrate the effectiveness by modeling training samples in a bilingual environment." ], [ "In this section, we illustrate back-translation approaches for cross-lingual machine reading comprehension, which is natural and easy to implement.", "Before introducing these approaches in detail, we will clarify crucial terminologies in this paper for better understanding.", "[leftmargin=*]", "Source Language: Rich-resourced and has sufficient large-scale training data that we aim to extract knowledge from. We use subscript S for variables in the source language.", "Target Language: Low-resourced and has only a few training data that we wish to optimize on. We use subscript T for variables in the target language.", "In this paper, we aim to improve the machine reading comprehension performance in Chinese (target language) by introducing English (source language) resources. The general idea of back-translation approaches is to translate $<$passage, question$>$ pair into the source language and generate an answer using a reading comprehension system in the source language. Finally, the generated answer is back-translated into the target language. In the following subsections, we will introduce several back-translation approaches for cross-lingual machine reading comprehension task. The architectures of the proposed back-translation approaches are depicted in Figure FIGREF5." ], [ "To build a simple cross-lingual machine reading comprehension system, it is straightforward to utilize translation system to bridge source and target language BIBREF10. Briefly, we first translate the target sample to the source language. 
Then we use a source reading comprehension system, such as BERT BIBREF7, to generate an answer in the source language. Finally, we use back-translation to get the answer in the target language. As we do not exploit any training data in the target language, we could regard this approach as a zero-shot cross-lingual baseline system.", "Specifically, we use Google Neural Machine Translation (GNMT) system for source-to-target and target-to-source translations. One may also use advanced and domain-specific neural machine translation system to achieve better translation performance, while we leave it for individuals, and this is beyond the scope of this paper.", "However, for span-extraction reading comprehension task, a major drawback of this approach is that the translated answer may not be the exact span in the target passage. To remedy this, we propose three simple approaches to improve the quality of the translated answer in the target language." ], [ "We propose a simple approach to align the translated answer into extract span in the target passage. We calculate character-level text overlap (for Chinese) between translated answer $A_{trans}$ and arbitrary sliding window in target passage $\\mathcal {P}_{T[i:j]}$. The length of sliding window ranges $len(A_{trans}) \\pm \\delta $, with a relax parameter $\\delta $. Typically, the relax parameter $\\delta \\in [0,5]$ as the length between ground truth and translated answer does not differ much in length. In this way, we would calculate character-level F1-score of each candidate span $\\mathcal {P}_{T[i:j]}$ and translated answer $A_{trans}$, and we could choose the best matching one accordingly. Using the proposed SimpleMatch could ensure the predicted answer is an exact span in target passage. As SimpleMatch does not use target training data either, it could also be a pipeline component in zero-shot settings." ], [ "Though we could use unsupervised approaches for aligning answer, such as the proposed SimpleMatch, it stops at token-level and lacks semantic awareness between the translated answer and ground truth answer. In this paper, we also propose two supervised approaches for further improving the answer span when there is training data available in the target language.", "The first one is Answer Aligner, where we feed translated answer $\\mathcal {A}_{trans}$ and target passage $\\mathcal {P}_{T}$ into the BERT and outputs the ground truth answer span $\\mathcal {A}_{T}$. The model will learn the semantic relations between them and generate improved span for the target language." ], [ "In Answer Aligner, we did not exploit question information in target training data. One can also utilize question information to transform Answer Aligner into Answer Verifier, as we use complete $\\langle \\mathcal {P}_T, \\mathcal {Q}_T, \\mathcal {A}_T \\rangle $ in the target language and additional translated answer $\\mathcal {A}_{trans}$ to verify its correctness and generate improved span." ], [ "One disadvantage of the back-translation approaches is that we have to recover the source answer into the target language. To remedy the issue, in this paper, we propose a novel model called Dual BERT to simultaneously model the training data in both source and target language to better exploit the relations among $<$passage, question, answer$>$. The model could be used when there is training data available for the target language, and we could better utilize source language data to enhance the target reading comprehension system. 
The overall neural architecture for Dual BERT is shown in Figure FIGREF11." ], [ "Bidirectional Encoder Representation from Transformers (BERT) has shown marvelous performance in various NLP tasks, which substantially outperforms non-pretrained models by a large margin BIBREF7. In this paper, we use multi-lingual BERT for better encoding the text in both source and target language. Formally, given target passage $\\mathcal {P}_{T}$ and question $\\mathcal {Q}_{T}$, we organize the input $X_{T}$ for BERT as follows.", "[CLS] ${\\mathcal {Q}_{T}}$ [SEP] ${\\mathcal {P}_{T}}$ [SEP]", "Similarly, we can also obtain source training sample by translating target sample with GNMT, forming input $X_{S}$ for BERT. Then we use $X_{T}$ and $X_{S}$ to obtain deep contextualized representations through a shared multi-lingual BERT, forming $B_{T}\\in \\mathbb {R}^{L_T*h}, B_{S}\\in \\mathbb {R}^{L_S*h}$, where $L$ represents the length of input and $h$ is the hidden size (768 for multi-lingual BERT)." ], [ "Typically, in the reading comprehension task, attention mechanism is used to measure the relations between the passage and question. Moreover, as Transformers are fundamental components of BERT, multi-head self-attention layer BIBREF15 is used to extract useful information within the input sequence.", "Specifically, in our model, to enhance the target representation, we use a multi-head self-attention layer to extract useful information in source BERT representation $B_{S}$. We aim to generate target span by not only relying on target representation but also on source representation to simultaneously consider the $<$passage, question$>$ relations in both languages, which can be seen as a bilingual decoding process.", "Briefly, we regard target BERT representation $B_T$ as query and source BERT representation $B_S$ as key and value in multi-head attention mechanism. In original multi-head attention, we calculate a raw dot attention as follows. This will result in an attention matrix $A_{TS}$ that indicate raw relations between each source and target token.", "To combine the benefit of both inter-attention and self-attention, instead of using Equation 1, we propose a simple modification on multi-head attention mechanism, which is called Self-Adaptive Attention (SAA). First, we calculate self-attention of $B_T$ and $B_S$ and apply the softmax function, as shown in Equation 2 and 3. This is designed to use self-attention to filter the irrelevant part within each representation firstly, and inform the raw dot attention on paying more attention to the self-attended part, making the attention more precise and accurate.", "Then we use self-attention $A_T$ and $A_S$, inter-attention $A_{TS}$ to get self-attentive attention $\\tilde{A}_{TS}$. We calculate dot product between ${A}_{ST}$ and $B_S$ to obtain attended representation $R^{\\prime } \\in \\mathbb {R}^{L_T*h}$.", "After obtaining attended representation $R^{\\prime }$, we use an additional fully connected layer with residual layer normalization which is similar to BERT implementation.", "Finally, we calculate weighted sum of $H_{T}$ to get final span prediction $P_{T}^{s}, P_{T}^{e}$ (superscript $s$ for start, $e$ for end). For example, the start position $P_{T}^{s}$ is calculated by the following equation.", "We calculate standard cross entropy loss for the start and end predictions in the target language." 
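A rough sketch of the Self-Adaptive Attention (SAA) step may help fix ideas. The text says the two self-attentions are used to filter the raw dot-product inter-attention, but does not spell out the exact composition; the product $A_T A_{TS} A_S$ below is one plausible reading, and the multi-head structure, residual connection, and layer normalization of the full model are omitted.

```python
import torch
import torch.nn.functional as F

def self_adaptive_attention(b_t: torch.Tensor, b_s: torch.Tensor) -> torch.Tensor:
    """Sketch of SAA. b_t: (L_T, h) target BERT representation (query side);
    b_s: (L_S, h) source BERT representation (key/value side)."""
    a_t = F.softmax(b_t @ b_t.t(), dim=-1)    # (L_T, L_T) target self-attention
    a_s = F.softmax(b_s @ b_s.t(), dim=-1)    # (L_S, L_S) source self-attention
    a_ts = b_t @ b_s.t()                      # (L_T, L_S) raw inter-attention
    # Assumed composition of the "self-attentive attention" (normalization is a choice).
    a_tilde = F.softmax(a_t @ a_ts @ a_s, dim=-1)   # (L_T, L_S)
    return a_tilde @ b_s                      # (L_T, h) attended source representation

r_prime = self_adaptive_attention(torch.randn(32, 768), torch.randn(48, 768))
print(r_prime.shape)  # torch.Size([32, 768])
```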
], [ "In order to evaluate how translated sample behaves in the source language system, we also generate span prediction for source language using BERT representation $B_S$ directly without further calculation, resulting in the start and target prediction $P_{S}^{s}, P_{S}^{e}$ (similar to Equation 8). Moreover, we also calculate cross-entropy loss $\\mathcal {L}_{aux}$ for translated sample (similar to Equation 9), where a $\\lambda $ parameter is applied to this loss.", "Instead of setting $\\lambda $ with heuristic value, in this paper, we propose a novel approach to better adjust $\\lambda $ automatically. As the sample was generated by the machine translation system, there would be information loss during the translation process. Wrong or partially translated samples may harm the performance of reading comprehension system. To measure how the translated samples assemble the real target samples, we calculate cosine similarity between the ground truth span representation in source and target language (denoted as $\\tilde{H}_{S}$ and $\\tilde{H}_{T}$). When the ground truth span representation in the translated sample is similar to the real target samples, the $\\lambda $ increase; otherwise, we only use target span loss as $\\lambda $ may decrease to zero.", "The span representation is the concatenation of three parts: BERT representation of ground truth start $B^s \\in \\mathbb {R}^{h} $, ground truth end $B^e \\in \\mathbb {R}^{h}$, and self-attended span $B^{att} \\in \\mathbb {R}^{h}$, which considers both boundary information (start/end) and mixed representation of the whole ground truth span. We use BERT representation $B$ to get a self-attended span representation $B^{att}$ using a simple dot product with average pooling, to get a 2D-tensor.", "The overall loss for Dual BERT is composed by two parts: target span loss $\\mathcal {L}_{T}$ and auxiliary span loss in source language $\\mathcal {L}_{aux}$." ], [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail.", "[leftmargin=*]", "Tokenization: Following the official BERT implementation, we use WordPiece tokenizer BIBREF16 for English and character-level tokenizer for Chinese.", "BERT: We use pre-trained English BERT on SQuAD 1.1 BIBREF4 for initialization, denoted as SQ-$B_{en}$ (base) and SQ-$L_{en}$ (large) for back-translation approaches. For other conditions, we use multi-lingual BERT as default, denoted as $B_{mul}$ (and SQ-$B_{mul}$ for those were pre-trained on SQuAD).", "Translation: We use Google Neural Machine Translation (GNMT) system for translation. 
We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance.", "Optimization: Following original BERT implementation, we use Adam with weight decay optimizer BIBREF18 using an initial learning rate of 4e-5 and use cosine learning rate decay scheme instead of the original linear decay, which we found it beneficial for stabilizing results. The training batch size is set to 64, and each model is trained for 2 epochs, which roughly takes 1 hour.", "Implementation: We modified the TensorFlow BIBREF19 version run_squad.py provided by BERT. All models are trained on Cloud TPU v2 that has 64GB HBM." ], [ "The overall results are shown in Table TABREF37. As we can see that, without using any alignment approach, the zero-shot results are quite lower regardless of using English BERT-base (#1) or BERT-large (#2). When we apply SimpleMatch (#3), we observe significant improvements demonstrating its effectiveness. The Answer Aligner (#4) could further improve the performance beyond SimpleMatch approach, demonstrating that the machine learning approach could dynamically adjust the span output by learning the semantic relationship between translated answer and target passage. Also, the Answer Verifier (#5) could further boost performance and surpass the multi-lingual BERT baseline (#7) that only use target training data, demonstrating that it is beneficial to adopt rich-resourced language to improve machine reading comprehension in other languages.", "When we do not use SQuAD pre-trained weights, the proposed Dual BERT (#8) yields significant improvements (all results are verified by p-test with $p<0.05$) over both Chinese BERT (#6) and multi-lingual BERT (#7) by a large margin. If we only train the BERT with SQuAD (#9), which is a zero-shot system, we can see that it achieves decent performance on two Chinese reading comprehension data. Moreover, we can also pursue further improvements by continue training (#10) with Chinese data starting from the system #9, or mixing Chinese data with SQuAD and training from initial multi-lingual BERT (#11). Under powerful SQuAD pre-trained baselines, Dual BERT (#12) still gives moderate and consistent improvements over Cascade Training (#10) and Mixed Training (#11) baselines and set new state-of-the-art performances on both datasets, demonstrating the effectiveness of using machine-translated sample to enhance the Chinese reading comprehension performance." ], [ "In this paper, we propose a simple but effective approach called SimpleMatch to align translated answer to original passage span. While one may argue that using neural machine translation attention to project source answer to original target passage span is ideal as used in BIBREF10. However, to extract attention value in neural machine translation system and apply it to extract the original passage span is bothersome and computationally ineffective. To demonstrate the effectiveness of using SimpleMatch instead of using NMT attention to extract original passage span in zero-shot condition, we applied SimpleMatch to Japanese and French SQuAD (304 samples for each) which is what exactly used in BIBREF10. 
The results are listed in Table TABREF40.", "From the results, we can see that, though our baseline (GNMT+BERT$_{L_{en}}$) is higher than previous work (Back-Translation BIBREF10), when using SimpleMatch to extract original passage span could obtain competitive of even larger improvements. In Japanese SQuAD, the F1 score improved by 9.6 in BIBREF10 using NMT attention, while we obtain larger improvement with 11.8 points demonstrating the effectiveness of the proposed method. BERT with pre-trained SQuAD weights yields the best performance among these systems, as it does not require the machine translation process and has unified text representations for different languages." ], [ "In this section, we ablate important components in our model to explicitly demonstrate its effectiveness. The ablation results are depicted in Table TABREF42.", "As we can see that, removing SQuAD pre-trained weights (i.e., using randomly initialized BERT) hurts the performance most, suggesting that it is beneficial to use pre-trained weights though the source and the target language is different. Removing source BERT will degenerate to cascade training, and the results show that it also harms overall performance, demonstrating that it is beneficial to utilize translated sample for better characterizing the relations between $<$passage, question, answer$>$. The other modifications seem to also consistently decrease the performance to some extent, but not as salient as the data-related components (last two lines), indicating that data-related approaches are important in cross-lingual machine reading comprehension task." ], [ "In our preliminary cross-lingual experiments, we adopt English as our source language data. However, one question remains unclear.", "Is it better to pre-train with larger data in a distant language (such as English, as oppose to Simplified Chinese), or with smaller data in closer language (such as Traditional Chinese)?", "To investigate the problem, we plot the multi-lingual BERT performance on the CMRC 2018 development data using different language and data size in the pre-training stage. The results are depicted in Figure FIGREF43, and we come to several observations.", "Firstly, when the size of pre-training data is under 25k (training data size of DRCD), we can see that there is no much difference whether we use Chinese or English data for pre-training, and even the English pre-trained models are better than Chinese pre-trained models in most of the times, which is not expected. We suspect that, by using multi-lingual BERT, the model tend to provide universal representations for the text and learn the language-independent semantic relations among the inputs which is ideal for cross-lingual tasks, thus the model is not that sensitive to the language in the pre-training stage. Also, as training data size of SQuAD is larger than DRCD, we could use more data for pre-training. When we add more SQuAD data ($>$25k) in the pre-training stage, the performance on the downstream task (CMRC 2018) continues to improve significantly. 
In this context, we conclude that,", "When the pre-training data is not abundant, there is no special preference on the selection of source (pre-training) language.", "If there are large-scale training data available for several languages, we should select the source language as the one that has the largest training data rather than its linguistic similarity to the target language.", "Furthermore, one could also take advantages of data in various languages, but not only in a bilingual environment, to further exploit knowledge from various sources, which is beyond the scope of this paper and we leave this for future work." ], [ "In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task. When there is no training data available for the target language, firstly, we provide several zero-shot approaches that were initially trained on English and transfer to other languages, along with three methods to improve the translated answer span by using unsupervised and supervised approaches. When there is training data available for the target language, we propose a novel model called Dual BERT to simultaneously model the $<$passage, question, answer$>$ in source and target languages using multi-lingual BERT. The proposed method takes advantage of the large-scale training data by rich-resource language (such as SQuAD) and learns the semantic relations between the passage and question in both source and target language. Experiments on two Chinese machine reading comprehension datasets indicate that the proposed model could give consistent and significant improvements over various state-of-the-art systems by a large margin and set baselines for future research on CLMRC task.", "Future studies on cross-lingual machine reading comprehension will focus on 1) how to utilize various types of English reading comprehension data; 2) cross-lingual machine reading comprehension without the translation process, etc." ], [ "We would like to thank all anonymous reviewers for their thorough reviewing and providing constructive comments to improve our paper. The first author was partially supported by the Google TensorFlow Research Cloud (TFRC) program for Cloud TPU access. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011, and 61772153." ] ], "section_name": [ "Introduction", "Related Works", "Back-Translation Approaches", "Back-Translation Approaches ::: GNMT", "Back-Translation Approaches ::: Simple Match", "Back-Translation Approaches ::: Answer Aligner", "Back-Translation Approaches ::: Answer Verifier", "Dual BERT", "Dual BERT ::: Dual Encoder", "Dual BERT ::: Bilingual Decoder", "Dual BERT ::: Auxiliary Output", "Experiments ::: Experimental Setups", "Experiments ::: Overall Results", "Experiments ::: Results on Japanese and French SQuAD", "Experiments ::: Ablation Studies", "Discussion", "Conclusion", "Acknowledgments" ] }
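As a small supplement to the Dual BERT auxiliary-output description above, the following sketch shows one way the similarity-gated loss weighting could look. It assumes lambda is the cosine similarity between the source- and target-side gold-span representations, clipped at zero, and it uses random vectors in place of real BERT span representations; the function and variable names are illustrative and not taken from the released code.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def dual_bert_loss(loss_target, loss_aux, span_repr_source, span_repr_target):
    """Total loss L = L_T + lambda * L_aux, with lambda gated by how closely the
    translated (source-side) gold span matches the target-side gold span."""
    lam = max(0.0, cosine(span_repr_source, span_repr_target))  # assumption: clip at zero
    return loss_target + lam * loss_aux

rng = np.random.default_rng(1)
# span representation = concatenation of start, end, and self-attended parts (3 * hidden size)
h_s, h_t = rng.normal(size=3 * 768), rng.normal(size=3 * 768)
print(round(dual_bert_loss(1.25, 0.90, h_s, h_t), 4))
```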
{ "answers": [ { "annotation_id": [ "766f8876da0a213c6760a3239ec9afff1d8d5940" ], "answer": [ { "evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail.", "Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance.", "FLOAT SELECTED: Table 1: Statistics of CMRC 2018 and DRCD." ], "extractive_spans": [], "free_form_answer": "Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified", "highlighted_evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29.", "The resource in source language was chosen as SQuAD BIBREF4 training data.", "We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance.", "FLOAT SELECTED: Table 1: Statistics of CMRC 2018 and DRCD." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "4a7735f60bea4856ed4cdc933f0b15be02e455bb" ], "answer": [ { "evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "827f01fb157690ba96602f8087a1453aa3cdb16c" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "How big are the datasets used?", "Is this a span-based (extractive) QA task?", "Are the contexts in a language different from the questions?" 
], "question_id": [ "3fb4334e5a4702acd44bd24eb1831bb7e9b98d31", "a9acd1af4a869c17b95ec489cdb1ba7d76715ea4", "afa94772fca7978f30973c43274ed826c40369eb" ], "question_writer": [ "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668" ], "search_query": [ "retrieval reading comprehension", "retrieval reading comprehension", "retrieval reading comprehension" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Back-translation approaches for cross-lingual machine reading comprehension (Left: GNMT, Middle: Answer Aligner, Right: Answer Verifier)", "Figure 2: System overview of the Dual BERT model for cross-lingual machine reading comprehension task.", "Table 1: Statistics of CMRC 2018 and DRCD.", "Table 2: Experimental results on CMRC 2018 and DRCD. † indicates unpublished works (some of the systems are using development set for training, which makes the results not directly comparable.). ♠ indicates zero-shot approach. We mark our system with an ID in the first column for reference simplicity.", "Table 3: Zero-shot cross-lingual machine reading comprehension results on Japanese and French SQuAD data. † are extracted in Asai et al. (2018).", "Table 4: Ablations of Dual BERT on the CMRC 2018 development set.", "Figure 3: BERT performance (average of EM and F1) with different amount of pre-training SQuAD (English) or DRCD (Traditional Chinese)." ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Figure3-1.png" ] }
[ "How big are the datasets used?" ]
[ [ "1909.00361-Experiments ::: Experimental Setups-5", "1909.00361-6-Table1-1.png", "1909.00361-Experiments ::: Experimental Setups-1", "1909.00361-Experiments ::: Experimental Setups-0" ] ]
[ "Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified" ]
549
1908.11546
Modeling Multi-Action Policy for Task-Oriented Dialogues
Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system. In most existing approaches, the agent predicts only one DM policy action per turn. This significantly limits the expressive power of the conversational agent and introduces unwanted turns of interaction that may try users' patience. Longer conversations also lead to more errors, and the system needs to be more robust to handle them. In this paper, we compare the performance of several models on the task of predicting multiple acts for each turn. A novel policy model is proposed, based on a recurrent cell called gated Continue-Act-Slots (gCAS), which overcomes the limitations of the existing models. Experimental results show that gCAS outperforms the other approaches. The code is available at this https URL

{ "paragraphs": [ [ "In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. The use of multiple actions has previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17.", "This task can be cast as a multi-label classification problem (if the sequential dependency among the acts is ignored) or as a sequence generation one as shown in Table TABREF4.", "In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\\textit {continue}, \\textit {act}, \\textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them." ], [ "The proposed policy network adopts an encoder-decoder architecture (Figure FIGREF5). The input to the encoder is the current-turn dialogue state, which follows BIBREF19's definition. It contains policy actions from the previous turn, user dialogue acts from the current turn, user requested slots, the user informed slots, the agent requested slots and agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU BIBREF20 to encode it. The encoded dialogue state is a sequence of vectors $\\mathbf {E} = (e_0, \\ldots , e_l)$ and the last hidden state is $h^{E}$. The CAS decoder recurrently generates tuples at each step. It takes $h^{E}$ as initial hidden state $h_0$. 
At each decoding step, the input contains the previous (continue, act, slots) tuple $(c_{t-1},a_{t-1},s_{t-1})$. An additional vector $k$ containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple $(c, a, s)$, where $c \\in \\lbrace \\langle \\text{continue} \\rangle , \\langle \\text{stop} \\rangle , \\langle \\text{pad} \\rangle \\rbrace $, $a \\in A$ (one act from the act set), and $s \\subset S$ (a subset from the slot set).", "" ], [ "As shown in Figure FIGREF7, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots respectively.", "The Continue unit maps the previous tuple $(c_{t-1}, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^c$. The hidden state from the previous step $h_{t-1}$ and $x_t^c$ are inputs to a $\\text{GRU}^c$ unit that produces output $g_t^c$ and hidden state $h_t^c$. Finally, $g_t^c$ is used to predict $c_t$ through a linear projection and a $\\text{softmax}$.", "The Act unit maps the tuple $(c_t, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^a$. The hidden state from the continue cell $h_t^c$ and $x_t^a$ are inputs to a $\\text{GRU}^a$ unit that produces output $g_t^a$ and hidden state $h_t^a$. Finally, $g_t^a$ is used to predict $a_t$ through a linear projection and a $\\text{softmax}$.", "The Slots unit maps the tuple $(c_t, a_t, s_{t-1})$ and the KB vector $k$ into $x_t^s$. The hidden state from the act cell $h_t^a$ and $x_t^s$ are inputs to a $\\text{GRU}^s$ unit that produces output $g_t^s$ and hidden state $h_t^s$. Finally, $g_t^a$ is used to predict $s_t$ through a linear projection and a $\\text{sigmoid}$. Let $z_t^i$ be the $i$-th slot's ground truth.", "The overall loss is the sum of the losses of the three units: $\\mathcal {L} = \\mathcal {L}^c + \\mathcal {L}^a + \\mathcal {L}^s$" ], [ "The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The size of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model." ], [ "We evaluate the performance at the act, frame and task completion level. For a frame to be correct, both the act and all the slots should match the ground truth. We report precision, recall, F$_1$ score of turn-level acts and frames. For task completion evaluation, Entity F$_1$ score and Success F$_1$ score BIBREF21 are reported. The Entity F$_1$ score, differently from the entity match rate in state tracking, compares the slots requested by the agent with the slots the user informed about and that were used to perform the KB query. We use it to measure agent performance in requesting information. The Success F$_1$ score compares the slots provided by the agent with the slots requested by the user. 
We use it to measure the agent performance in providing information.", "Critical slots and Non-critical slots: By `non-critical', we mean slots that the user informs the system about by providing their values and thus it is not critical for the system to provide them in the output. Table 1 shows an example, with the genre slot provided by the user and the system repeating it in its answer. Critical slots refers to slots that the system must provide like “moviename” in the Table 1 example. Although non-critical slots do not impact task completion directly, they may influence the output quality by enriching the dialogue state and helping users understand the system's utterance correctly. Furthermore, given the same dialog state, utterances offering non-critical slots or not offering them can both be present in the dataset, as they are optional. This makes the prediction of those slots more challenging for the system. To provide a more detailed analysis, we report the precision, recall, F$_1$ score of turn-level for all slots, critical slots and non-critical slots of the inform act." ], [ "We compare five methods on the multi-act task.", "Classification replicates the MSR challenge BIBREF19 policy network architecture: two fully connected layers. We replace the last activation from $\\text{softmax}$ to $\\text{sigmoid}$ in order to predict probabilities for each act-slot pair. It is equivalent to binary classification for each act-slot pair and the loss is the sum of the binary cross-entropy of all of them.", "Seq2Seq BIBREF12 encodes the dialogue state as a sequence, and decodes agent acts as a sequence with attention BIBREF22.", "Copy Seq2Seq BIBREF23 adds a copy mechanism to Seq2Seq, which allows copying words from the encoder input.", "CAS adopts a single GRU BIBREF20 for decoding and uses three different fully connected layers for mapping the output of the GRU to continue, act and slots. For each step in the sequence of CAS tuples, given the output of the GRU, continue, act and slot predictions are obtained by separate heads, each with one fully connected layer. The hidden state of the GRU and the predictions at the previous step are passed to the cell at the next step connecting them sequentially.", "gCAS uses our proposed recurrent cell which contains separate continue, act and slots unit that are sequentially connected.", "The classification architecture has two fully connected layers of size 128, and the remaining models have a hidden size of 64 and a teacher-forcing rate of 0.5. Seq2Seq and Copy Seq2Seq use a beam search with beam size 10 during inference. CAS and gCAS do not adopt a beam search since their inference steps are much less than Seq2Seq methods. All models use Adam optimizer BIBREF24 with a learning rate of 0.001." ], [ "As shown in Table TABREF13, gCAS outperforms all other methods on Entity F$_1$ in all three domains. Compared to Seq2Seq, the performance advantage of gCAS in the taxi and restaurant domains is small, while it is more evident in the movie domain. The reason is that in the movie domain the proportion of turns with multiple acts is higher (52%), while in the other two domains it is lower (30%). gCAS also outperforms all other models in terms of Success F$_1$ in the movie and restaurant domain but is outperformed by the classification model in the taxi domain. The reason is that in the taxi domain, the agent usually informs the user at the last turn, while in all previous turns the agent usually requests information from the user. 
It is easy for the classification model to overfit this pattern. The advantage of gCAS in the restaurant domain is much more evident: the agent's inform act usually has multiple slots (see example 2 in Table TABREF15) and this makes classification and sequence generation harder, but gCAS multi-label slots decoder handles it easily.", "Table TABREF14 shows the turn-level acts and frame prediction performance. CAS and gCAS outperform all other models in acts prediction in terms of F$_1$ score. The main reason is that CAS and gCAS output a tuple at each recurrent step, which makes for shorter sequences that are easier to generate compared to the long sequences of Seq2Seq (example 2 in Table TABREF15). The classification method has a good precision score, but a lower recall score, suggesting it has problems making granular decisions (example 2 in Table TABREF15). At the frame level, gCAS still outperforms all other methods. The performance difference between CAS and gCAS on frames becomes much more evident, suggesting that gCAS is more capable of predicting slots that are consistent with the act. This finding is also consistent with their Entity F$_1$ and Success F$_1$ performance.", "However, gCAS's act-slot pair performance is far from perfect. The most common failure case is on non-critical slots (like `genre' in the example in Table TABREF4): gCAS does not predict them, while it predicts the critical ones (like `moviename' in the example in Table TABREF4).", "Table TABREF15 shows predictions of all methods from two emblematic examples. Example 1 is a frequent single-act multi-slots agent act. Example 2 is a complex multi-act example. The baseline classification method can predict frequent pairs in the dataset, but cannot predict any act in the complex example. The generated sequences of Copy Seq2Seq and Seq2Seq show that both models struggle in following the syntax. CAS cannot predict slots correctly even if the act is common in the dataset. gCAS returns a correct prediction for Example 1, but for Example 2 gCAS cannot predict `starttime', which is a non-critical slot.", "Tables TABREF16 and TABREF17 show the results of all slots, critical slots and non-critical slots under the inform act. gCAS performs better than the other methods on all slots in the movie and restaurant domains. The reason why classification performs the best here in the taxi domain is the same as the Success F$_1$. In the taxi domain, the agent usually informs the user at the last turn. The non-critical slots are also repeated frequently in the taxi domain, which makes their prediction easier. gCAS's performance is close to other methods on critical-slots. The reason is that the inform act is mostly the first act in multi-act and critical slots are usually frequent in the data. All methods can predict them well.", "In the movie and restaurant domains, the inform act usually appears during the dialogue and there are many optional non-critical slots that can appear (see Table TABREF11, movie and restaurant domains have more slots and pairs than the taxi domain). gCAS can better predict the non-critical slots than other methods. However, the overall performance on non-critical slots is much worse than critical slots since their appearances are optional and inconsistent in the data." ], [ "In this paper, we introduced a multi-act dialogue policy model motivated by the need for a richer interaction between users and conversation agents. 
We studied classification and sequence generation methods for this task, and proposed a novel recurrent cell, gated CAS, which allows the decoder to output a tuple at each step. Experimental results showed that gCAS is the best performing model for multi-act prediction. The CAS decoder and the gCAS cell can also be used in a user simulator and gCAS can be applied in the encoder. A few directions for improvement have also been identified: 1) improving the performance on non-critical slots, 2) tuning the decoder with RL, 3) text generation from gCAS. We leave them as future work." ], [ "We would like to express our special thanks to Alexandros Papangelis and Gokhan Tur for their support and contribution. We also would like to thank Xiujun Li for his help on dataset preparation and Jane Hung for her valuable comments. Bing Liu is partially supported by the NSF grant IIS-1910424 and a research gift from Northrop Grumman." ] ], "section_name": [ "Introduction", "Methodology", "Methodology ::: gCAS Cell", "Experiments", "Experiments ::: Evaluation Metrics", "Experiments ::: Baseline", "Experiments ::: Result and Error Analysis", "Conclusion and Future Work", "Acknowledgments" ] }
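As a supplement to the gCAS description above, here is a minimal decoding-step sketch with toy dimensions and randomly initialized weights. Teacher forcing, the loss terms, and batching are omitted, and two details are assumptions rather than the paper's specification: the exact way the (continue, act, slots) tuple and KB vector are mapped into each unit's input (a plain one-hot/multi-hot concatenation here), and the choice to carry the slots unit's hidden state over to the next decoding step.

```python
import numpy as np

rng = np.random.default_rng(0)
H, N_CONT, N_ACT, N_SLOT, K_DIM = 16, 3, 5, 8, 2   # toy sizes, not the paper's

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def make_gru(in_dim, hid_dim=H):
    g = lambda *s: rng.normal(scale=0.1, size=s)
    return dict(Wz=g(in_dim, hid_dim), Uz=g(hid_dim, hid_dim), bz=np.zeros(hid_dim),
                Wr=g(in_dim, hid_dim), Ur=g(hid_dim, hid_dim), br=np.zeros(hid_dim),
                Wn=g(in_dim, hid_dim), Un=g(hid_dim, hid_dim), bn=np.zeros(hid_dim))

def gru_step(p, x, h):
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"] + p["bz"])
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"] + p["br"])
    n = np.tanh(x @ p["Wn"] + (r * h) @ p["Un"] + p["bn"])
    return (1 - z) * n + z * h

# each unit's input: one-hot continue, one-hot act, multi-hot slots, and the KB vector
in_dim = N_CONT + N_ACT + N_SLOT + K_DIM
gru_c, gru_a, gru_s = make_gru(in_dim), make_gru(in_dim), make_gru(in_dim)
W_c, W_a, W_s = (rng.normal(scale=0.1, size=(H, n)) for n in (N_CONT, N_ACT, N_SLOT))

def gcas_step(h_prev, c_prev, a_prev, s_prev, k):
    onehot = lambda i, n: np.eye(n)[i]
    # continue unit: consumes the previous (c, a, s) tuple plus the KB vector, emits c_t
    x_c = np.concatenate([onehot(c_prev, N_CONT), onehot(a_prev, N_ACT), s_prev, k])
    h_c = gru_step(gru_c, x_c, h_prev)
    c_t = int(np.argmax(softmax(h_c @ W_c)))
    # act unit: consumes the updated continue decision c_t, emits a_t
    x_a = np.concatenate([onehot(c_t, N_CONT), onehot(a_prev, N_ACT), s_prev, k])
    h_a = gru_step(gru_a, x_a, h_c)
    a_t = int(np.argmax(softmax(h_a @ W_a)))
    # slots unit: consumes c_t and a_t, emits a multi-label slot vector via sigmoid
    x_s = np.concatenate([onehot(c_t, N_CONT), onehot(a_t, N_ACT), s_prev, k])
    h_s = gru_step(gru_s, x_s, h_a)
    s_t = (sigmoid(h_s @ W_s) > 0.5).astype(float)
    return h_s, c_t, a_t, s_t

h, c, a, s = np.zeros(H), 0, 0, np.zeros(N_SLOT)
k = np.array([0.2, 0.1])  # scaled KB-result count and turn number (assumed encoding)
for _ in range(3):        # unroll a few CAS decoding steps
    h, c, a, s = gcas_step(h, c, a, s, k)
    print(c, a, s)
```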
{ "answers": [ { "annotation_id": [ "b8a64a4d67c7487ae8a64a3e227bfde8f3fa45d9" ], "answer": [ { "evidence": [ "The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The size of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model." ], "extractive_spans": [], "free_form_answer": "Microsoft Research dataset containing movie, taxi and restaurant domains.", "highlighted_evidence": [ "The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "4a86da3c805824f977f954f79658847b160b7a6f" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 5: Entity F1 and Success F1 at dialogue level." ], "extractive_spans": [], "free_form_answer": "For entity F1 in the movie, taxi and restaurant domain it results in scores of 50.86, 64, and 60.35. For success, it results it outperforms in the movie and restaurant domain with scores of 77.95 and 71.52", "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Entity F1 and Success F1 at dialogue level." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "edb2dbb401908c99dd1d3a59a62b342594714816" ], "answer": [ { "evidence": [ "In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\\textit {continue}, \\textit {act}, \\textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them." 
], "extractive_spans": [], "free_form_answer": "It has three sequentially connected units to output continue, act and slots generating multi-acts in a doble recurrent manner.", "highlighted_evidence": [ "In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\\textit {continue}, \\textit {act}, \\textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no" ], "question": [ "What datasets are used for training/testing models? ", "How better is gCAS approach compared to other approaches?", "What is specific to gCAS cell?" ], "question_id": [ "6f2118a0c64d5d2f49eee004d35b956cb330a10e", "8a0a51382d186e8d92bf7e78277a1d48958758da", "b8dea4a98b4da4ef1b9c98a211210e31d6630cf3" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Dialogue example.", "Figure 1: CAS decoder: at each step, a tuple of (continue, act, slots) is produced. The KB vector k regarding the queried result from knowledge base is not shown for brevity.", "Figure 2: The gated CAS recurrent cell contains three units: continue unit, act unit and slots unit. The three units use a gating mechanism and are sequentially connected. The KB vector k is not shown for brevity.", "Table 5: Entity F1 and Success F1 at dialogue level.", "Table 4: Dialogue act counts by turn.", "Table 3: Dataset: train, validation and test split, and the count of distinct acts, slots and act-slot pairs.", "Table 2: Multiple dialogue act format in different architectures.", "Table 6: Precision (P), Recall (R) and F1score (F1) of turn-level acts and frames.", "Table 7: Examples of predicted dialogue acts in the restaurant domain.", "Table 8: P ,R and F1 of turn-level inform all slots and non-critical slots.", "Table 9: P , R and F1 of turn-level inform critical slots." ], "file": [ "1-Table1-1.png", "2-Figure1-1.png", "2-Figure2-1.png", "3-Table5-1.png", "3-Table4-1.png", "3-Table3-1.png", "3-Table2-1.png", "4-Table6-1.png", "4-Table7-1.png", "5-Table8-1.png", "5-Table9-1.png" ] }
[ "What datasets are used for training/testing models? ", "How better is gCAS approach compared to other approaches?", "What is specific to gCAS cell?" ]
[ [ "1908.11546-Experiments-0" ], [ "1908.11546-3-Table5-1.png" ], [ "1908.11546-Introduction-2" ] ]
[ "Microsoft Research dataset containing movie, taxi and restaurant domains.", "For entity F1 in the movie, taxi and restaurant domain it results in scores of 50.86, 64, and 60.35. For success, it results it outperforms in the movie and restaurant domain with scores of 77.95 and 71.52", "It has three sequentially connected units to output continue, act and slots generating multi-acts in a doble recurrent manner." ]
550
1905.10238
Incorporating Context and External Knowledge for Pronoun Coreference Resolution
Linking pronominal expressions to the correct references requires, in many cases, careful analysis of the contextual information together with external knowledge. In this paper, we propose a two-layer model for pronoun coreference resolution that leverages both context and external knowledge, where a knowledge attention mechanism is designed to ensure that the model draws on the appropriate source of external knowledge for a given context. Experimental results demonstrate the validity and effectiveness of our model, which outperforms state-of-the-art models by a large margin.
{ "paragraphs": [ [ "The question of how human beings resolve pronouns has long been of interest to both linguistics and natural language processing (NLP) communities, for the reason that pronoun itself has weak semantic meaning BIBREF0 and brings challenges in natural language understanding. To explore solutions for that question, pronoun coreference resolution BIBREF1 was proposed. As an important yet vital sub-task of the general coreference resolution task, pronoun coreference resolution is to find the correct reference for a given pronominal anaphor in the context and has been shown to be crucial for a series of downstream tasks BIBREF2 , including machine translation BIBREF3 , summarization BIBREF4 , information extraction BIBREF5 , and dialog systems BIBREF6 .", "Conventionally, people design rules BIBREF1 , BIBREF7 , BIBREF8 or use features BIBREF9 , BIBREF10 , BIBREF11 to resolve the pronoun coreferences. These methods heavily rely on the coverage and quality of the manually defined rules and features. Until recently, end-to-end solution BIBREF12 was proposed towards solving the general coreference problem, where deep learning models were used to better capture contextual information. However, training such models on annotated corpora can be biased and normally does not consider external knowledge.", "Despite the great efforts made in this area in the past few decades BIBREF1 , BIBREF8 , BIBREF9 , BIBREF13 , pronoun coreference resolution remains challenging. The reason behind is that the correct resolution of pronouns can be influenced by many factors BIBREF0 ; many resolution decisions require reasoning upon different contextual and external knowledge BIBREF14 , which is also proved in other NLP tasks BIBREF15 , BIBREF16 , BIBREF17 . Figure 1 demonstrates such requirement with three examples, where Example A depends on the plurality knowledge that `them' refers to plural noun phrases; Example B illustrates the gender requirement of pronouns where `she' can only refer to a female person (girl); Example C requires a more general type of knowledge that `cats can climb trees but a dog normally does not'. All of these knowledge are difficult to be learned from training data. Considering the importance of both contextual information and external human knowledge, how to jointly leverage them becomes an important question for pronoun coreference resolution.", "In this paper, we propose a two-layer model to address the question while solving two challenges of incorporating external knowledge into deep models for pronoun coreference resolution, where the challenges include: first, different cases have their knowledge preference, i.e., some knowledge is exclusively important for certain cases, which requires the model to be flexible in selecting appropriate knowledge per case; second, the availability of knowledge resources is limited and such resources normally contain noise, which requires the model to be robust in learning from them.", "Consequently, in our model, the first layer predicts the relations between candidate noun phrases and the target pronoun based on the contextual information learned by neural networks. The second layer compares the candidates pair-wisely, in which we propose a knowledge attention module to focus on appropriate knowledge based on the given context. Moreover, a softmax pruning is placed in between the two layers to select high confident candidates. The architecture ensures the model being able to leverage both context and external knowledge. 
Especially, compared with conventional approaches that simply treat external knowledge as rules or features, our model is not only more flexible and effective but also interpretable as it reflects which knowledge source has the higher weight in order to make the decision. Experiments are conducted on a widely used evaluation dataset, where the results prove that the proposed model outperforms all baseline models by a great margin.", "Above all, to summarize, this paper makes the following contributions:" ], [ "Following the conventional setting BIBREF1 , the task of pronoun coreference resolution is defined as: for a pronoun $p$ and a candidate noun phrase set ${\\mathcal {N}}$ , the goal is to identify the correct non-pronominal references set ${\\mathcal {C}}$ . the objective is to maximize the following objective function: ", "$${\\mathcal {J}}= \\frac{\\sum _{c \\in {\\mathcal {C}}}{e^{F(c, p)}}}{\\sum _{n \\in {\\mathcal {N}}}e^{F(n, p)}},$$ (Eq. 8) ", "where $c$ is the correct reference and $n$ the candidate noun phrase. $F(\\cdot )$ refers to the overall coreference scoring function for each $n$ regarding $p$ . Following BIBREF8 , all non-pronominal noun phrases in the recent three sentences of the pronoun $p$ are selected to form $N$ .", "Particularly in our setting, we want to leverage both the local contextual information and external knowledge in this task, thus for each $n$ and $p$ , $F(.)$ is decomposed into two components: ", "$$F(n, p) = F_c(n, p) + F_k(n, p),$$ (Eq. 10) ", "where $F_c(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the contextual information; $F_k(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the external knowledge. There could be multiple ways to compute $F_c$ and $F_k$ , where a solution proposed in this paper is described as follows." ], [ "The architecture of our model is shown in Figure 2 , where we use two layers to incorporate contextual information and external knowledge. Specifically, the first layer takes the representations of different $n$ and the $p$ as input and predict the relationship between each pair of $n$ and $p$ , so as to compute $F_c$ . The second layer leverages the external knowledge to compute $F_k$ , which consists of pair-wise knowledge score $f_k$ among all candidate $n$ . To enhance the efficiency of the model, a softmax pruning module is applied to select high confident candidates into the second layer. The details of the aforementioned components are described in the following subsections." ], [ "Before $F_c$ is computed, the contextual information is encoded through a span representation (SR) module in the first layer of the model. Following BIBREF12 , we adopt the standard bidirectional LSTM (biLSTM) BIBREF18 and the attention mechanism BIBREF19 to generate the span representation, as shown in Figure 3 . Given that the initial word representations in a span $n_i$ are ${\\bf x}_1,...,{\\bf x}_T$ ,", "we denote their representations ${\\bf x}^*_1,...,{\\bf x}^*_T$ after encoded by the biLSTM.", "Then we obtain the inner-span attention by ", "$$a_t = \\frac{e^{\\alpha _t}}{\\sum _{k=1}^{T}e^{\\alpha _k}},$$ (Eq. 14) ", "where $\\alpha _t$ is computed via a standard feed-forward neural network $\\alpha _t$ = $NN_\\alpha ({\\bf x}^*_t)$ . Thus, we have the weighted embedding of each span $\\hat{x}_i$ through ", "$$\\hat{{\\bf x}}_i = \\sum _{k=1}^{T}a_k \\cdot {\\bf x}_k.$$ (Eq. 
16) ", "Afterwards, we concatenate the starting ( ${\\bf x}^*_{start}$ ) and ending ( ${\\bf x}^*_{end}$ ) embedding of each span, as well as its weighted embedding ( $\\hat{{\\bf x}}_i$ ) and the length feature ( $\\phi (i)$ ) to form its final representation $e$ : ", "$${\\bf e}_i = [{\\bf x}^*_{start},{\\bf x}^*_{end},\\hat{{\\bf x}}_i,\\phi (i)].$$ (Eq. 17) ", "Once the span representation of $n \\in {\\mathcal {N}}$ and $p$ are obtained, we compute $F_c$ for each $n$ with a standard feed-forward neural network: ", "$$F_c(n, p) = NN_c([{\\bf e}_n, {\\bf e}_p, {\\bf e}_n \\odot {\\bf e}_p]),$$ (Eq. 18) ", "where $\\odot $ is the element-wise multiplication." ], [ "In the second layer of our model, external knowledge is leveraged to evaluate all candidate $n$ so as to give them reasonable $F_k$ scores. In doing so, each candidate is represented as a group of features from different knowledge sources, e.g., `the cat' can be represented as a singular noun, unknown gender creature, and a regular subject of the predicate verb `climb'. For each candidate, we conduct a series of pair-wise comparisons between it and all other ones to result in its $F_k$ score. An attention mechanism is proposed to perform the comparison and selectively use the knowledge features. Consider there exists noise in external knowledge, especially when it is automatically generated, such attention mechanism ensures that, for each candidate, reliable and useful knowledge is utilized rather than ineffective ones. The details of the knowledge attention module and the overall scoring are described as follows.", "Knowledge Attention Figure 4 demonstrates the structure of the knowledge attention module, where there are two components: (1) weighting: assigning weights to different knowledge features regarding their importance in the comparison; (2) scoring: valuing a candidate against another one based on their features from different knowledge sources. Assuming that there are $m$ knowledge sources input to our model, each candidate can be represented by $m$ different features, which are encoded as embeddings. Therefore, two candidates $n$ and $n^\\prime $ regarding $p$ have their knowledge feature embeddings ${\\bf k}_{n,p}^1, {\\bf k}_{n,p}^2, ..., {\\bf k}_{n,p}^m$ and ${\\bf k}_{n^\\prime ,p}^1,{\\bf k}_{n^\\prime ,p}^2,...,{\\bf k}_{n^\\prime ,p}^m$ , respectively. The weighting component receives all features ${\\bf k}$ for $n$ and $n^\\prime $ , and the span representations $m$0 and $m$1 as input, where $m$2 and $m$3 help selecting appropriate knowledge based on the context. As a result, for a candidate pair ( $m$4 , $m$5 ) and a knowledge source $m$6 , its knowledge attention score is computed via ", "$$\\beta _i(n, n^\\prime , p) = NN_{ka}([{\\bf o}_{n,p}^i, {\\bf o}_{n^\\prime ,p}^i, {\\bf o}_{n,p}^i \\odot {\\bf o}_{n^\\prime ,p}^i]),$$ (Eq. 21) ", "where $ {\\bf o}_{n,p}^i= [{\\bf e}_n, {\\bf k}_{n,p}^i]$ and ${\\bf o}_{n^\\prime ,p}^i = [{\\bf e}_{n^\\prime }, {\\bf k}_{n^\\prime ,p}^i]$ are the concatenation of span representation and external knowledge embedding for candidate $n$ and $n^\\prime $ respectively. The weight for features from different knowledge sources is thus computed via ", "$$w_i = \\frac{e^{\\beta _i}}{\\sum _{j=1}^{m}e^{\\beta _j}}.$$ (Eq. 
22) ", "Similar to the weighting component, for each feature $i$ , we compute its score $f_k^i(n, n^\\prime , p)$ for $n$ against $n^\\prime $ in the scoring component through ", "$$f_k^i(n, n^\\prime , p) = NN_{ks}([{\\bf k}_{n,p}^i, {\\bf k}_{n^\\prime ,p}^i, {\\bf k}_{n,p}^i \\odot {\\bf k}_{n^\\prime ,p}^i]).$$ (Eq. 23) ", "where it is worth noting that we exclude ${\\bf e}$ in this component for the reason that, in practice, the dimension of ${\\bf e}$ is normally much higher than ${\\bf k}$ . As a result, it could dominate the computation if ${\\bf e}$ and ${\\bf k}$ is concatenated.", "Once the weights and scores are obtained, we have a weighted knowledge score for $n$ against $n^\\prime $ : ", "$$f_k(n, n^\\prime , p) = \\sum _{i=1}^{m}w_i \\cdot f_k^i(n, n^\\prime , p).$$ (Eq. 25) ", "Overall Knowledge Score After all pairs of $n$ and $n^\\prime $ are processed by the attention module, the overall knowledge score for $n$ is computed through the averaged $f_k(n, n^\\prime , p)$ over all $n^\\prime $ : ", "$$F_k(n, p) = \\frac{\\sum _{n^\\prime \\in {\\mathcal {N}}_o} f_k(n, n^\\prime , p)}{|{\\mathcal {N}}_o|},$$ (Eq. 26) ", "where ${\\mathcal {N}}_o = {\\mathcal {N}}- n$ for each $n$ ." ], [ "Normally, there could be many noun phrases that serve as the candidates for the target pronoun. One potential obstacle in the pair-wise comparison of candidate noun phrases in our model is the squared complexity $O(|{\\mathcal {N}}|^2)$ with respect to the size of ${\\mathcal {N}}$ . To filter out low confident candidates so as to make the model more efficient, we use a softmax-pruning module between the two layers in our model to select candidates for the next step. The module takes $F_c$ as input for each $n$ , uses a softmax computation: ", "$$\\hat{F}_c(n, p) = \\frac{e^{F_c(n, p)}}{\\sum _{n_i \\in {\\mathcal {N}}}e^{F_c(n_i, p)}}.$$ (Eq. 28) ", "where candidates with higher $\\hat{F}_c$ are kept, based on a threshold $t$ predefined as the pruning standard. Therefore, if candidates have similar $F_c$ scores, the module allow more of them to proceed to the second layer. Compared with other conventional pruning methods BIBREF12 , BIBREF20 that generally keep a fixed number of candidates, our pruning strategy is more efficient and flexible." ], [ "The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional approaches BIBREF9 , BIBREF11 , for each pronoun in the document, we consider candidate $n$ from the previous two sentences and the current sentence. For pronouns, we consider two types of them following BIBREF9 , i.e., third personal pronoun (she, her, he, him, them, they, it) and possessive pronoun (his, hers, its, their, theirs). Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset. According to our selection range of candidate $n$ , on average, each pronoun has 4.6 candidates and 1.3 correct references." ], [ "In this study, we use two types of knowledge in our experiments. The first type is linguistic features, i.e., plurality and animacy & gender. We employ the Stanford parser, which generates plurality, animacy, and gender markups for all the noun phrases, to annotate our data. Specifically, the plurality feature denotes each $n$ and $p$ to be singular or plural. For each candidate $n$ , if its plurality status is the same as the target pronoun, we label it 1, otherwise 0. 
The animacy & gender (AG) feature denotes whether a $n$ or $p$ is a living object, and being male, female, or neutral if it is alive. For each candidate $n$ , if its AG feature matches the target pronoun's, we label it 1, otherwise 0.", "The second type is the selectional preference (SP) knowledge. For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting. Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number), where predicate is the governor and argument the dependent in the original parsed dependency edge. Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking $p$ and its corresponding predicate. Then for each candidate $n$ in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj). The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge. Thus in the previous example:", "The dog is chasing the cat but it climbs the tree.", "Its parsing result indicates that `it' is the subject of the verb `climb'. Then for `the dog', `the cat', and `the tree', we check their associations with `climb' in the knowledge base and group them in the buckets to form the SP knowledge features." ], [ "Several baselines are compared in this work. The first two are conventional unsupervised ones:", "[leftmargin=*]", "Recent Candidate, which simply selects the most recent noun phrase that appears in front of the target pronoun.", "Deterministic model BIBREF22 , which proposes one multi-pass seive model with human designed rules for the coreference resolution task.", "Besides the unsupervised models, we also compare with three representative supervised ones:", "[leftmargin=*]", "Statistical model, proposed by BIBREF23 , uses human-designed entity-level features between clusters and mentions for coreference resolution.", "Deep-RL model, proposed by BIBREF24 , a reinforcement learning method to directly optimize the coreference matrix instead of the traditional loss function.", "End2end is the current state-of-the-art coreference model BIBREF20 , which performs in an end-to-end manner and leverages both the contextual information and a pre-trained language model BIBREF25 .", "Note that the Deterministic, Statistical, and Deep-RL models are included in the Stanford CoreNLP toolkit, and experiments are conducted with their provided code. For End2end, we use their released code and replace its mention detection component with gold mentions for the fair comparison.", "To clearly show the effectiveness of the proposed model, we also present a variation of our model as an extra baseline to illustrate the effect of different knowledge incorporation manner:", "[leftmargin=*]", "Feature Concatenation, a simplified version of the complete model that removes the second knowledge processing layer, but directly treats all external knowledge embeddings as features and concatenates them to span representations." 
], [ "Following previous work BIBREF20 , we use the concatenation of the 300d GloVe embeddings BIBREF26 and the ELMo BIBREF25 embeddings as the initial word representations. Out-of-vocabulary words are initialized with zero vectors. Hyper-parameters are set as follows. The hidden state of the LSTM module is set to 200, and all the feed-forward networks in our model have two 150-dimension hidden layers. The default pruning threshold $t$ for softmax pruning is set to $10^{-7}$ . All linguistic features (plurality and AG) and external knowledge (SP) are encoded as 20-dimension embeddings.", "For model training, we use cross-entropy as the loss function and Adam BIBREF27 as the optimizer. All the aforementioned hyper-parameters are initialized randomly, and we apply dropout rate 0.2 to all hidden layers in the model. Our model treats a candidate as the correct reference if its predicted overall score $F(n,p)$ is larger than 0. The model training is performed with up to 100 epochs, and the best one is selected based on its performance on the development set." ], [ "Table 2 compares the performance of our model with all baselines. Overall, our model performs the best with respect to all evaluation metrics. Several findings are also observed from the results. First, manually defined knowledge and features are not enough to cover rich contextual information. Deep learning models (e.g., End2end and our proposed models), which leverage text representations for context, outperform other approaches by a great margin, especially on the recall. Second, external knowledge is highly helpful in this task, which is supported by that our model outperforms the End2end model significantly.", "Moreover, the comparison between the two variants of our models is also interesting, where the final two-layer model outperforms the Feature Concatenation model. It proves that simply treating external knowledge as the feature, even though they are from the same sources, is not as effective as learning them in a joint framework. The reason behind this result is mainly from the noise in the knowledge source, e.g., parsing error, incorrectly identified relations, etc. For example, the plurality of 17% noun phrases are wrongly labeled in the test data. As a comparison, our knowledge attention might contribute to alleviate such noise when incorporating all knowledge sources.", "Effect of Different Knowledge To illustrate the importance of different knowledge sources and the knowledge attention mechanism, we ablate various components of our model and report the corresponding F1 scores on the test data. The results are shown in Table 3 , which clearly show the necessity of the knowledge. Interestingly, AG contributes the most among all knowledge types, which indicates that potentially more cases in the evaluation dataset demand on the AG knowledge than others. More importantly, the results also prove the effectiveness of the knowledge attention module, which contributes to the performance gap between our model and the Feature Concatenation one.", "Effect of Different Pruning Thresholds We try different thresholds $t$ for the softmax pruning in selecting reliable candidates. The effects of different thresholds on reducing candidates and overall performance are shown in Figure 5 and 6 respectively. Along with the increase of $t$ , both the max and the average number of pruned candidates drop quickly, so that the space complexity of the model can be reduced accordingly. 
In particular, as many as 80% of the candidates can be filtered out when $t = 10^{-1}$ . Meanwhile, as shown in Figure 6 , the model performs stably as the number of candidates decreases. Not surprisingly, the precision rises when the number of candidates is reduced, yet the recall drops dramatically, eventually resulting in a drop in F1. Given the above observations, the reason we set $t = 10^{-7}$ as the default threshold is straightforward: at this value, one-third of the candidates are pruned with almost no influence on the model performance in terms of precision, recall, and the F1 score." ], [ "To further demonstrate the effectiveness of incorporating knowledge into pronoun coreference resolution, two examples are provided for detailed analysis. The prediction results of the End2end model and our complete model are shown in Table 4 . The two examples pose different challenges. In Example A, `Jesus', `man', and `my son' are all similar (male) noun phrases matching the target pronoun `He'. The End2end model predicts all of them to be correct references because their context provides limited help in distinguishing them. In Example B, the distance between `an accident' and the pronoun `it' is too large. As a result, the `None' prediction from the End2end model indicates that the contextual information is not enough to make the decision. In comparison, integrating external knowledge into our model helps solve such challenges; e.g., in Example A, SP knowledge helps when Plurality and AG cannot distinguish all candidates.", "To clearly illustrate how our model leverages the external knowledge, we visualize the knowledge attention of the correct reference against other candidates via heatmaps in Figure 7 . Two interesting observations are drawn from the visualization. First, given two candidates, if they are significantly different in one feature, our model tends to pay more attention to that feature. Take AG as an example: in Example A, the AG features of all candidates consistently match the pronoun `he' (all male/neutral), so the comparison between `my son' and the other candidates pays no attention to the AG feature. In Example B, by contrast, the target pronoun `it' cannot refer to a human, so `father' and `friend' are 0 on the AG feature while `hospital' and `accident' are 1. As a result, the attention module emphasizes AG more than other knowledge types. Second, the importance of SP is clearly shown in these examples. In Example A, where the Plurality and AG features cannot help, the attention module puts more weight on SP because `son' appears 100 times as the argument of the parsed predicate `child' in the SP knowledge base, while other candidates appear much less often at that position. In Example B, as mentioned above, once AG helps narrow the candidates down to `hospital' and `accident', SP plays an important role in distinguishing them because `accident' appears 26 times in the SP knowledge base as the argument of the parsed predicate `fault', while `hospital' never appears at that position." ], [ "Coreference resolution is a core task for natural language understanding, which detects mention spans and identifies coreference relations among them. As demonstrated in BIBREF12 , mention detection and coreference prediction are the two major focuses of the task. 
Different from the general coreference task, pronoun coreference resolution has its own unique challenge, since the semantics of pronouns are often not as clear as those of normal noun phrases; in general, how to leverage the context and external knowledge to resolve the coreference for pronouns becomes its focus BIBREF1 , BIBREF14 , BIBREF28 .", "In previous work, external knowledge, including manually defined rules BIBREF1 , BIBREF9 , such as the number/gender requirements of different pronouns, and world knowledge BIBREF14 , such as selectional preference BIBREF29 , BIBREF30 and eventuality knowledge BIBREF31 , has been proven helpful for pronoun coreference resolution. Recently, with the development of deep learning, BIBREF12 proposed an end-to-end model that learns contextual information with an LSTM module and proved that such knowledge is helpful for coreference resolution when the context is properly encoded. The aforementioned two types of knowledge have their own advantages: the contextual information covers diverse text expressions that are difficult to predefine, while the external knowledge is usually more precisely constructed and able to provide extra information beyond the training data. Different from previous work, we explore the possibility of joining the two types of knowledge for pronoun coreference resolution rather than using only one of them. To the best of our knowledge, this is the first attempt to use a deep learning model to incorporate both contextual information and external knowledge for pronoun coreference resolution." ], [ "In this paper, we proposed a two-layer model for pronoun coreference resolution, where the first layer encodes contextual information and the second layer leverages external knowledge. In particular, a knowledge attention mechanism is proposed to selectively leverage features from different knowledge sources. As an enhancement to existing methods, the proposed model combines the advantages of conventional feature-based models and deep learning models, so that context and external knowledge can be synchronously and effectively used for this task. Experimental results and case studies demonstrate the superiority of the proposed model over state-of-the-art baselines. Since the proposed model adopts an extensible structure, one possible direction for future work is to explore the best way to enhance it with more complicated knowledge resources such as knowledge graphs." ], [ "This paper was partially supported by the Early Career Scheme (ECS, No.26206717) from the Research Grants Council of Hong Kong. In addition, Hongming Zhang has been supported by the Hong Kong Ph.D. Fellowship and the Tencent Rhino-Bird Elite Training Program. We also thank the anonymous reviewers for their valuable comments and suggestions that helped improve the quality of this paper." ] ], "section_name": [ "Introduction", "The Task", "The Model", "Encoding Contextual Information", "Processing External Knowledge", "Softmax Pruning", "Data", "Knowledge Types", "Baselines", "Implementation", "Experimental Results", "Case Study", "Related Work", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "4a8b2c06fd45fcf979f93c96736e5d54824a1515" ], "answer": [ { "evidence": [ "The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional approaches BIBREF9 , BIBREF11 , for each pronoun in the document, we consider candidate $n$ from the previous two sentences and the current sentence. For pronouns, we consider two types of them following BIBREF9 , i.e., third personal pronoun (she, her, he, him, them, they, it) and possessive pronoun (his, hers, its, their, theirs). Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset. According to our selection range of candidate $n$ , on average, each pronoun has 4.6 candidates and 1.3 correct references." ], "extractive_spans": [ "CoNLL-2012 shared task BIBREF21 corpus" ], "free_form_answer": "", "highlighted_evidence": [ "The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "4857c606a55a83454e8d81ffe17e05cf8bc4b75f" ] }, { "annotation_id": [ "f78c35233856bf4062e22ded2d512020c7c0e553" ], "answer": [ { "evidence": [ "The second type is the selectional preference (SP) knowledge. For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting. Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number), where predicate is the governor and argument the dependent in the original parsed dependency edge. Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking $p$ and its corresponding predicate. Then for each candidate $n$ in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj). The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge. Thus in the previous example:" ], "extractive_spans": [], "free_form_answer": "counts of predicate-argument tuples from English Wikipedia", "highlighted_evidence": [ "For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "What dataset do they evaluate their model on?", "What is the source of external knowledge?" ], "question_id": [ "4146e1d8f79902c0bc034695998b724515b6ac81", "42394c54a950bae8cebecda9de68ee78de69dc0d" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "coreference resolution", "coreference resolution" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Pronoun coreference examples, where each example requires different knowledge for its resolution. Blue bold font refers to the target pronoun, where the correct noun reference and other candidates are marked by green underline and brackets, respectively.", "Figure 2: The architecture of the two-layer model for pronoun coreference resolution. The first layer encodes the contextual information for computing Fc. The second layer leverages external knowledge to score Fk. A pruning layer is applied in between the two layers to control computational complexity. The dashed boxes in the first and second layer refer to span representation and knowledge scoring, respectively.", "Figure 3: The structure of span representation. Bidirectional LSTM and inner-span attention mechanism are employed to capture the contextual information.", "Figure 4: The structure of the knowledge attention module. For each feature ki from knowledge source i, the the weighting component predict its weight wi and the scoring component computes its knowledge score f ik. Then a weighted sum is obtained for fk.", "Table 1: Statistics of the evaluation dataset. Number of selected pronouns are reported.", "Table 2: Pronoun coreference resolution performance of different models on the evaluation dataset. Precision (P), recall (R), and F1 score are reported, with the best one in each F1 column marked bold.", "Figure 5: Effect of different thresholds on candidate numbers. Max and Average number of candidates after pruning are represented with solid lines in blue and orange, respectively. Two dashed lines indicate the max and the average number of candidates before pruning.", "Figure 6: Effect of different pruning thresholds on model performance. With the threshold increasing, the precision increases while the recall and F1 drop.", "Table 3: Performance of our model with removing different knowledge sources and knowledge attention.", "Table 4: The comparison of End2end and our model on two examples drawn from the test data. Pronouns are marked as blue bold font. Correct references are indicated in green underline font and other candidates are indicated with brackets. ‘None’ refers to that none of the candidates is predicated as the correct reference.", "Figure 7: Heatmaps of knowledge attention for two examples, where in each example the knowledge attention weights of the correct references against other candidates are illustrated. Darker color refers to higher weight on the corresponding knowledge type." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Figure5-1.png", "7-Figure6-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Figure7-1.png" ] }
[ "What is the source of external knowledge?" ]
[ [ "1905.10238-Knowledge Types-1" ] ]
[ "counts of predicate-argument tuples from English Wikipedia" ]
551
1905.07464
A Multi-Task Learning Framework for Extracting Drugs and Their Interactions from Drug Labels
Preventable adverse drug reactions as a result of medical errors present a growing concern in modern medicine. As drug-drug interactions (DDIs) may cause adverse reactions, being able to extracting DDIs from drug labels into machine-readable form is an important effort in effectively deploying drug safety information. The DDI track of TAC 2018 introduces two large hand-annotated test sets for the task of extracting DDIs from structured product labels with linkage to standard terminologies. Herein, we describe our approach to tackling tasks one and two of the DDI track, which corresponds to named entity recognition (NER) and sentence-level relation extraction respectively. Namely, our approach resembles a multi-task learning framework designed to jointly model various sub-tasks including NER and interaction type and outcome prediction. On NER, our system ranked second (among eight teams) at 33.00% and 38.25% F1 on Test Sets 1 and 2 respectively. On relation extraction, our system ranked second (among four teams) at 21.59% and 23.55% on Test Sets 1 and 2 respectively.
{ "paragraphs": [ [ "Preventable adverse drug reactions (ADRs) introduce a growing concern in the modern healthcare system as they represent a large fraction of hospital admissions and play a significant role in increased health care costs BIBREF0 . Based on a study examining hospital admission data, it is estimated that approximately three to four percent of hospital admissions are caused by adverse events BIBREF1 ; moreover, it is estimated that between 53% and 58% of these events were due to medical errors BIBREF2 (and are therefore considered preventable). Such preventable adverse events have been cited as the eighth leading cause of death in the U.S., with an estimated fatality rate of between 44,000 and 98,000 each year BIBREF3 . As drug-drug interactions (DDIs) may lead to preventable ADRs, being able to extract DDIs from structured product labeling (SPL) documents for prescription drugs is an important effort toward effective dissemination of drug safety information. The Text Analysis Conference (TAC) is a series of workshops aimed at encouraging research in natural language processing (NLP) and related applications by providing large test collections along with a standard evaluation procedure. The Drug-Drug Interaction Extraction from Drug Labels track of TAC 2018 BIBREF4 , organized by the U.S. Food and Drug Administration (FDA) and U.S. National Library of Medicine (NLM), is established with the goal of transforming the contents of SPLs into a machine-readable format with linkage to standard terminologies.", "We focus on the first two tasks of the DDI track involving named entity recognition (NER) and relation extraction (RE). Task 1 is focused on identifying mentions in the text corresponding to precipitants, interaction triggers, and interaction effects. Precipitants are defined as substances, drugs, or a drug class involved in an interaction. Task 2 is focused on identifying sentence-level interactions; concretely, the goal is to identify the interacting precipitant, the type of the interaction, and outcome of the interaction. The interaction outcome depends on the interaction type as follows. Pharmacodynamic (PD) interactions are associated with a specified effect corresponding to a span within the text that describes the outcome of the interaction. Naturally, it is possible for a precipitant to be involved in multiple PD interactions. Pharmacokinetic (PK) interactions are associated with a label from a fixed vocabulary of National Cancer Institute (NCI) Thesaurus codes indicating various levels of increase/decrease in functional measurements. For example, consider the sentence: “There is evidence that treatment with phenytoin leads to to decrease intestinal absorption of furosemide, and consequently to lower peak serum furosemide concentrations.” Here, phenytoin is involved in a PK interaction with the label drug, furosemide, and the type of PK interaction is indicated by the NCI Thesaurus code C54615 which describes a decrease in the maximum serum concentration (C INLINEFORM0 ) of the label drug. Lastly, unspecified (UN) interactions are interactions with an outcome that is not explicitly stated in the text and usually indicated through cautionary statements. Figure FIGREF1 features a simple example of a PD interaction that is extracted from the drug label for Adenocard, where the precipitant is digitalis and the effect is “ventricular fibrillation.”" ], [ "Herein, we describe the training and testing data involved in this task and the metrics used for evaluation. 
In Section SECREF5 , we describe our modeling approach, our deep learning architecture, and our training procedure." ], [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively." ], [ "We used the official evaluation metrics for NER and relation extraction based on the standard precision, recall, and F1 micro-averaged over exactly matched entity/relation annotations. For either task, there are two matching criteria: primary and relaxed. For entity recognition, relaxed matching considers only entity bounds while primary matching considers entity bounds as well as the type of the entity. For relation extraction, relaxed matching only considers precipitant drug (and their bounds) while primary matching comprehensively considers precipitant drugs and, for each, the corresponding interaction type and interaction outcome. As relation extraction evaluation takes into account the bounds of constituent entity predictions, relation extraction performance is heavily reliant on entity recognition performance. On the other hand, we note that while NER evaluation considers trigger mentions, triggers are ignored when evaluating relation extraction performance." ], [ "We propose a multi-task learning framework for extracting drug-drug interactions from drug labels. The framework involves branching paths for each training objective (corresponding to sub-tasks) such that parameters of earlier layers (i.e., the context encoder) are shared.", "Since only drugs involved in an interaction (precipitants) are annotated in the ground truth, we model the task of precipitant recognition and interaction type prediction jointly. We accomplish this by reducing the problem to a sequence tagging problem via a novel NER tagging scheme. That is, for each precipitant drug, we additionally encode the associated interaction type. Hence, there are five possible tags: T for trigger, E for effects, and D, K, and U for precipitants with pharmacodynamic, pharmacokinetic, and unspecified interactions respectively. 
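To make the combined tagging scheme concrete, the following toy sketch converts token-level mention spans into the five-letter BIO tags; the sentence is a paraphrase of the Adenocard example from the introduction, and the token spans chosen for the trigger and effect are illustrative assumptions rather than gold annotations.

```python
def tag_sentence(tokens, annotations):
    """Convert mention annotations into the combined BIO / interaction-type scheme
    described above (T = trigger, E = effect, D/K/U = precipitant by interaction type).
    `annotations` maps a (start, end) token span to one of the five tag letters."""
    tags = ["O"] * len(tokens)
    for (start, end), letter in annotations.items():
        tags[start] = f"B-{letter}"
        for i in range(start + 1, end):
            tags[i] = f"I-{letter}"
    return list(zip(tokens, tags))

tokens = ("The use of Adenocard in patients receiving digitalis "
          "may be rarely associated with ventricular fibrillation").split()
# 'digitalis' is a pharmacodynamic precipitant (D), 'associated with' the trigger (T),
# and 'ventricular fibrillation' the effect (E); spans are token indices.
annotations = {(7, 8): "D", (11, 13): "T", (13, 15): "E"}
for tok, tag in tag_sentence(tokens, annotations):
    print(f"{tok}\t{tag}")
```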
As a preprocesssing step, we identify the label drug in the sentence, if it is mentioned, and bind it to a generic entity token (e.g. “LABELDRUG”). We additionally account for label drug aliases, such as the generic version of a brand-name drug, and bind them to the same entity token. Table TABREF7 shows how the tagging scheme is applied to the simple example in Figure FIGREF1 . A drawback is that simplifying assumptions must be made that will hamper recall; e.g., we only consider non-overlapping mentions (more later).", "Once we have identified the precipitant offsets (as well as of triggers/effects) and the interaction type for each precipitant, we subsequently predict the outcome or consequence of the interaction (if any). To that end, we consider all entity spans annotated with K tags and assign them a label from a static vocabulary of 20 NCI concept codes corresponding to PK consequence (i.e., multiclass classification) based on sentence-context. Likewise, we consider all entity spans annotated with D tags and link them to mention spans annotated with E tags; we accomplish this via binary classification of all pairwise combinations. For entity spans annotated with U tags, no outcome prediction is made.", "Our proposed deep neural network is illustrated in Figure FIGREF8 . We utilize Bi-directional Long Short-Term Memory networks (Bi-LSTMs) and convolutional neural networks (CNNs) designed for natural language processing as building blocks for our architecture BIBREF6 , BIBREF7 . Entity recognition and outcome prediction share common parameters via a Bi-LSTM context encoder that composes a context representation at each timestep based on input words mapped to dense embeddings and character-CNN composed representations. We use the same character-CNN representation as described in a prior work BIBREF8 ; however, in this work, we omit the character type embedding. A Bi-LSTM component is used to annotate IOB tags for joint entity recognition and interaction type prediction (or, NER prediction) while a CNN with two separate dense output layers (one for PK and one for PD interactions) is used for outcome prediction. We consider NER prediction to be the main objective with outcome prediction playing a secondary role. When predicting outcome, the contextual input is arranged such that candidate entity (and effect) mentions are bound to generic tokens; the resulting representation is referred to as “entity-bound word embeddings” in Figure FIGREF8 .", "We denote INLINEFORM0 as an abstract function, representing a standard bi-directional recurrent neural network with LSTM units, where INLINEFORM1 is the number of input vector representations (e.g., word embeddings) in the sequence and INLINEFORM2 and INLINEFORM3 are the dimensionality of the input and output representations respectively. We similarity denote INLINEFORM4 to represent a standard CNN that maps an INLINEFORM5 matrix to a vector representation of length INLINEFORM6 , where INLINEFORM7 is a list of window (or kernel) sizes that are used in the convolution.", "Let the input be a sentence of length INLINEFORM0 represented as a matrix INLINEFORM1 , where each row corresponds to a word embedding of length INLINEFORM2 . Moreover, let INLINEFORM3 represent the word at position INLINEFORM4 of the sentence such that each of the INLINEFORM5 rows correspond to a character embedding of length INLINEFORM6 . 
The purpose of the context encoder is to encode each word of the input with surrounding linguistic features and long-distance dependency information. To that end, we employ the use of a Bi-LSTM network to encode S as a context matrix INLINEFORM7 where INLINEFORM8 is a hyper-parameter of the network. Concretely, DISPLAYFORM0 ", "where INLINEFORM0 denotes the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 is the vector concatenation operator. Essentially, for each word, we compose character representations using a CNN with a window size of three and concatenate them to pre-trained word embeddings; we stack the concatenated vectors as rows of a new matrix that is ultimately fed as input to the Bi-LSTM context encoder. The INLINEFORM4 row of INLINEFORM5 , denoted as INLINEFORM6 , represents the entire context centered at the INLINEFORM7 word. As an implementation detail, we chose INLINEFORM8 and INLINEFORM9 to be the maximum sentence and word length (according to the training data) respectively and pad shorter examples with zero vectors.", "The network for the NER objective manifests as a stacked Bi-LSTM architecture when we consider both the context encoder and the entity recognition component. Borrowing from residual networks BIBREF9 , we re-inforce the input by concatenating word embeddings to the intermediate context vectors before feeding it to the second Bi-LSTM layer. Concretely, the final entity recognition matrix INLINEFORM0 is composed such that DISPLAYFORM0 ", "The output at each position INLINEFORM0 is INLINEFORM1 ", "where INLINEFORM0 is the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 and INLINEFORM4 are network parameters such that INLINEFORM5 denotes the number of possible IOB tags such as O, B-K, I-K and so on. In order to obtain a categorical distribution, we apply the SoftMax function to INLINEFORM6 such that INLINEFORM7 ", "where INLINEFORM0 is the vector of probability estimates serving as a categorical distribution over INLINEFORM1 tags for the word at position INLINEFORM2 . We optimize by computing the standard categorical cross-entropy loss for each of the INLINEFORM3 individual tag predictions. The final loss to be optimized is the mean over all INLINEFORM4 individually-computed losses.", "A stacked Bi-LSTM architecture improves over a single Bi-LSTM architecture given its capacity to learn deep contextualized embeddings. While we showed that the stacked approach is better for this particular task in Section SECREF19 , it is not necessarily the case that a stacked approach is better in general. We offer an alternative explanation and motivation for using a stacked architecture for this particular problem based on our initial intuition as follows. First, we note that a standalone Bi-LSTM is not able to handle the inference aspect of NER, which entails learning IOB constraints. As an example, in the IOB encoding scheme, it is not possible for a I-D tag to immediately follow a B-E tag; in this way, the prediction of a tag is directly dependent on the prediction of neighboring tags. This inference aspect is typically handled by a linear-chain CRF. 
We believe that a stacked Bi-LSTM at least partially handles this aspect in the sense that the first Bi-LSTM (the context encoder) is given the opportunity to form independent preliminary decisions while the second Bi-LSTM is tasked with to making final decisions (based on preliminary ones) that are more globally consistent with respect to IOB constraints.", "To predict outcome, we construct a secondary branch in the network path that involves convolving over the word and context embeddings made available in earlier layers. We first define a relation representation INLINEFORM0 that is produced by convolving with window sizes 3, 4, and 5 over the context vectors concatenated to entity-bound versions of the original input; concretely, INLINEFORM1 ", "where INLINEFORM0 is the entity-bound version of INLINEFORM1 . Based on this outcome representation, we compose two separate softmax outputs: one for PK interactions and one for PD interactions. Concretely, the output layers are INLINEFORM2 ", "and INLINEFORM0 ", "where INLINEFORM0 and INLINEFORM1 are probability estimates serving as a categorical distribution over the outcome label space for PD and PK respectively and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are parameters of the network. For PK, INLINEFORM6 given there are 20 possible NCI Thesaurus codes corresponding to PK outcomes. For PD, INLINEFORM7 as it is a binary classification problem to assess whether the precipitant and effect pair encoded by INLINEFORM8 are linked. We optimize using the standard categorical cross-entropy loss on both objectives.", "In NLM-180, there is no distinction between triggers and effects; moreover, PK effects are limited to coarse-grained (binary) labels corresponding to increase or decrease in function measurements. Hence, a direct mapping from NLM-180 to Training-22 is impossible. As a compromise, NLM-180 “triggers” were mapped to Training-22 triggers in the case of unspecified and PK interactions. For PD interactions, we instead mapped NLM-180 “triggers” to Training-22 effects, which we believe to be appropriate based on our manual analysis of the data. Since we do not have both trigger and effect for every PD interaction, we opted to ignore trigger mentions altogether in the case of PD interactions to avoid introducing mixed signals. While trigger recognition has no bearing on relation extraction performance, this policy has the effect of reducing the recall upperbound on NER by about 25% (more later on upperbound). To overcome the lack of fine-grained annotations for PK outcome in NLM-180, we deploy the well-known bootstrapping approach BIBREF10 to incrementally annotate NLM-180 PK outcomes using Training-22 annotations as seed examples. To mitigate the problem of semantic drift, in each bootstrap cycle, we re-annotated by hand predictions that were not consistent with the original NLM-180 coarse annotations (i.e., active learning BIBREF11 ).", "We train the three objective losses (NER, PK outcome, and PD outcome) in an interleaved fashion at the minibatch BIBREF12 level. We use word embeddings of size 200 pre-trained on the PubMed corpus BIBREF13 as input to the network; these are further modified during back-propagation. For the character-level CNN, we set the character embedding size to 24 with 50 filters over a window size of 3; the final character-CNN composition is therefore of length 50. For each Bi-LSTM, the hidden size is set to 100 such that context vectors are 200 in length. 
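The dimensions above are enough to sketch the NER branch of the architecture. The following simplified PyTorch sketch is not the authors' code: it omits the outcome-prediction branch, padding and masking, dropout, and the loss weighting, and the vocabulary sizes and tag count are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedTagger(nn.Module):
    def __init__(self, vocab=5000, chars=80, tags=11,      # 11 = B/I for T, E, D, K, U plus O
                 w_dim=200, c_dim=24, c_filters=50, hidden=100):
        super().__init__()
        self.wemb = nn.Embedding(vocab, w_dim)
        self.cemb = nn.Embedding(chars, c_dim)
        self.char_cnn = nn.Conv1d(c_dim, c_filters, kernel_size=3, padding=1)
        self.ctx_lstm = nn.LSTM(w_dim + c_filters, hidden, batch_first=True, bidirectional=True)
        # The second Bi-LSTM sees the context vectors re-inforced with the word embeddings.
        self.ner_lstm = nn.LSTM(2 * hidden + w_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, tags)

    def forward(self, words, chars):
        # words: (batch, seq); chars: (batch, seq, max_word_len)
        w = self.wemb(words)                                   # (B, T, 200)
        B, T, L = chars.shape
        c = self.cemb(chars.view(B * T, L)).transpose(1, 2)    # (B*T, 24, L)
        c = F.relu(self.char_cnn(c)).max(dim=2).values         # max-pool over characters -> (B*T, 50)
        c = c.view(B, T, -1)
        ctx, _ = self.ctx_lstm(torch.cat([w, c], dim=-1))      # context matrix, (B, T, 200)
        h, _ = self.ner_lstm(torch.cat([ctx, w], dim=-1))      # entity recognition matrix, (B, T, 200)
        return self.out(h)                                     # per-token tag scores (pre-softmax)

model = StackedTagger()
words = torch.randint(0, 5000, (2, 30))
chars = torch.randint(0, 80, (2, 30, 12))
print(model(words, chars).shape)   # torch.Size([2, 30, 11])
```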
For outcome prediction, we used window sizes of 3, 4, and 5 with 50 filters per window size; the final vector representation for outcome prediction is therefore 150 in length.", "A held-out development set of 4 drug labels is used for tuning and validation. The models are trained for 30 epochs with check-pointing; only the check-point with the best performance on the development set is kept for testing. We dynamically set the mini-batch size INLINEFORM0 as a function of the number of examples INLINEFORM1 such that the number of training iterations is roughly 300 per epoch (and also constant regardless of training data size); concretely, INLINEFORM2 . As a form of regularization, we apply dropout BIBREF14 at a rate of 50% on the hidden representations immediately after a Bi-LSTM or CNN composition. The outcome objectives are trained such that the gradients of the context encoder weights are downscaled by an order of magnitude (i.e., one tenth) to encourage learning at the later layers. When learning on the NER objective – the main branch of the network – the gradients are not downscaled in the same manner. Moreover, when training on the NER objective, we upweight the loss penalty on “relation” tags (non-O tags) by a factor of 10, which forces the model to prioritize differentiation between different types of interactions over span segmentation. We additionally upweight the loss penalty by a factor of 3 on Training-22 examples compared to NLM-180 examples. We optimize using the Adam BIBREF15 optimization method. These hyper-parameters were tuned during initial experiments." ], [ "In this section, we present and discuss the results of our cross-validation experiments. We then describe the “runs” that were submitted as challenge entries and present our official challenge results. We discuss these results in Section SECREF28 ." ], [ "We present the results of our initial experiments in Table TABREF20 . Evaluations were produced as a result of 11-fold cross-validation over Training-22 with two drug labels per fold. Instead of macro-averaging over folds, and thereby weighting each fold equally, we evaluate on the union of all 11 test-fold predictions.", "The upperbound in Table TABREF20 is produced by reducing Training-22 (with gold labels) to our sequence-tagging format and then reverting it back to the original official XML format. Lowered recall is mostly due to simplifying assumptions; e.g., we only consider non-overlapping mentions. For coordinated disjoint cases such as “X and Y inducers”, we only considered “Y inducers” in our simplifying assumption. Imperfect precision is due to discrepancies between the tokenization scheme used by our method and that used to produce gold annotations; this leads to the occasional mismatch in entity offsets during evaluation.", "Using a stacked Bi-LSTM trained on the original 22 training examples (Table TABREF20 ; row 1) as our baseline, we make the following observations. Incorporating NLM-180 resulted in a significant boost of more than 20 F1-points in relation extraction performance and more than 10 F1-points in NER performance (Table TABREF20 ; row 2), despite the lowered upperbound on NER recall as mentioned in Section SECREF5 . Adding character-CNN based word representations improved performance marginally, more so for NER than relation extraction (Table TABREF20 ; row 3). 
We also implemented several tweaks to the pre-processing and post-processing aspects of the model based on preliminary error analysis including (1) using drug class mentions (e.g., “diuretics”) as proxies if the drug label is not mentioned directly; (2) removing modifiers such as moderate, strong, and potent so that output conforms to official annotation guidelines; and (3) purging predicted mentions with only stopwords or generic terms such as “drugs” or “agents.” These tweaks improved performance by more than two F1-points across both metrics (Table TABREF20 ; row 4).", "Based on early experiments with simpler models tuned on relaxed matching (not shown in Table TABREF20 and not directly comparable to results displayed in Table TABREF20 ), we found that a stacked Bi-LSTM architecture improves over a single Bi-LSTM by approximately four F1-points on relation extraction (55.59% vs. 51.55% F1 tuned on the relaxed matching criteria). We moreover found that omitting word embeddings as input at the second Bi-LSTM results in worse performance at 52.91% F1.", "We also experimented with using Temporal Convolution Networks (TCNs) BIBREF16 as a “drop-in” replacement for Bi-LSTMs. Our attempts involved replacing only the second Bi-LSTM with a TCN (Table TABREF20 ; row 4) as well as replacing both Bi-LSTMs with TCNs (Table TABREF20 ; row 5). The results of these early experiments were not promising and further fine-tuning may be necessary for better performance." ], [ "Our final system submission is based on a stacked Bi-LSTM network with character-CNNs trained on both Training-22 and NLM-180 (corresponding to row 4 of Table TABREF20 ). We submitted the following three runs based on this architecture:", "A single model.", "", "An ensemble over ten models each trained with randomly initialized weights and a random development split. Intuitively, models collectively “vote” on predicted annotations that are kept and annotations that are discarded. A unique annotation (entity or relation) has one vote for each time it appears in one of the ten model prediction sets. In terms of implementation, unique annotations are incrementally added (to the final prediction set) in order of descending vote count; subsequent annotations that conflict (i.e., overlap based on character offsets) with existing annotations are discarded. Hence, we loosely refer to this approach as “voting-based” ensembling.", "", "A single model with pre/post-processing rules to handle modifier coordinations; for example, “X and Y inducers” would be correctly identified as two distinct entities corresponding to “X inducers” and “Y inducers.” Here, we essentially encoded “X and Y inducers” as a single entity when training the NER objective; during test time, we use simple rules based on pattern matching to split the joint “entity” into its constituents.", "", "Eight teams participated in task 1 while four teams participated in task 2. We record the relative performance of our system (among others in the top 5) on the two official test sets in Table TABREF24 . For each team, we only display the performance of the best run for a particular test set. Methods are grouped by the data used for training and ranked in ascending order of primary relation extraction performance followed by entity recognition performance. We also included a single model trained solely on Training-22, that was not submitted, for comparison. Our voting-based ensemble performed best among the three systems submitted by our team on both NER and relation extraction. 
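The voting scheme sketched above can be written out explicitly. The following is a toy illustration under the assumption that each prediction is a (start, end, label) triple over character offsets; it is not the submitted implementation.

```python
from collections import Counter

def overlaps(a, b):
    # Two annotations conflict if their character offsets overlap.
    return a[0] < b[1] and b[0] < a[1]

def vote_ensemble(prediction_sets):
    """Count how often each unique annotation appears across the individual models,
    then add annotations greedily in order of descending vote count, discarding any
    that overlap with an already accepted one."""
    votes = Counter(ann for preds in prediction_sets for ann in set(preds))
    accepted = []
    for ann, _ in votes.most_common():
        if not any(overlaps(ann[:2], kept[:2]) for kept in accepted):
            accepted.append(ann)
    return accepted

# Predictions of three hypothetical models over the same sentence.
model_preds = [
    [(0, 9, "D"), (15, 27, "E")],
    [(0, 9, "D"), (15, 30, "E")],
    [(0, 9, "D"), (15, 27, "E")],
]
print(vote_ensemble(model_preds))   # the majority (15, 27, "E") span wins over (15, 30, "E")
```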
In the official challenge, this model placed second overall on both NER and relation extraction.", "Tang et al. BIBREF20 boasts the top performing system on both tasks. In addition to Training-22 and NLM-180, the team trained and validated their models on a set of 1148 sentences sampled from DailyMed labels that were manually annotated according to official annotation guidelines. Hence, strictly speaking, their method is not directly comparable to ours given the significant difference in available training data." ], [ "While precision was similar between the three systems (with exceptions), we observed that our ensemble-based system benefited mostly from improved recall. This aligns with our initial expectation (based on prior experience with deep learning models) that an ensemble-based approach would improve stability and accuracy with deep neural models. Although including NLM-180 as training data resulted in significant performance gains during 11-fold cross validation, we find that the same improvements were not as dramatic on either test sets despite the 800% gain in training data. As such, we offer the following analysis. First, we suspect that there may be a semantic or annotation drift between these datasets as annotation guidelines evolve over time and as annotators become more experienced. To our knowledge, the datasets were annotated in the following order: NLM-180, Training-22, and finally Test Sets 1 and 2; moreover, Test Sets 1 and 2 were annotated by separate groups of annotators. Second, having few but higher quality examples may be more advantageous than having many but lower quality examples, at least for this particular task where evaluation is based on matching exact character offsets. Finally, we note that the top performing system exhibits superior performance on Test Set 1 compared to Test Set 2; interestingly, we observe an inverse of the scenario in our own system. This may be an indicator that our system struggles with data that is more “sparse” (as previously defined in Section SECREF2 )." ], [ "We presented a method for jointly extracting precipitants and their interaction types as part of a multi-task framework that additionally detects interaction outcome. Among three “runs”, a ten model voting-ensemble was our best performer. In future efforts, we will experiment with Graph Convolution Networks BIBREF21 over dependency trees as a “drop-in” replace for Bi-LSTMs to assess its suitability for this task." ], [ "This research was conducted during TT's participation in the Lister Hill National Center for Biomedical Communications (LHNCBC) Research Program in Medical Informatics for Graduate students at the U.S. National Library of Medicine, National Institutes of Health. HK is supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health. RK and TT are also supported by the U.S. National Library of Medicine through grant R21LM012274." ] ], "section_name": [ "Introduction", "Materials and Methods", "Datasets", "Evaluation Metrics", "Methodology", "Results and Discussion", "Validation Results", "Official Test Results", "Discussion", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "4b637c3e214b64e96036644ab3eec3bbe4c98e77" ], "answer": [ { "evidence": [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively.", "FLOAT SELECTED: Table 1: Characteristics of datasets" ], "extractive_spans": [], "free_form_answer": "Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences", "highlighted_evidence": [ "Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems.", "We provide summary statistics about these datasets in Table TABREF3 . ", "FLOAT SELECTED: Table 1: Characteristics of datasets" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "d066d02f03fce1dfed6f160f160c1d80175a5cdb" ], "answer": [ { "evidence": [ "Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). 
Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively." ], "extractive_spans": [ "Training-22", "NLM-180" ], "free_form_answer": "", "highlighted_evidence": [ "The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. ", "As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What were the sizes of the test sets?", "What training data did they use?" ], "question_id": [ "4a4616e1a9807f32cca9b92ab05e65b05c2a1bf5", "3752bbc5367973ab5b839ded08c57f51336b5c3d" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: An example illustrating the DDI task", "Table 1: Characteristics of datasets", "Table 2: Example of the tagging scheme", "Figure 2: The multi-task neural network for DDI extraction", "Table 3: Preliminary results based on 11-fold cross validation over Training-22 with two held-out drug labels per fold. When NLM-180 is incorporated, the training data used for each fold consists of 20 non-held out drug labels from Training-22 and all 180 drug labels from NLM-180.", "Table 4: Comparison of our method with that of other teams in the top 5. Only the best performing method of each team is shown; methods are grouped by available training data and ranked in ascending order by relation extraction (primary) performance followed by entity recognition performance. 1This model was not submitted and is shown for reference only 2HS refers to a private dataset of 1148 sentences manually-annotated by Tang et al. [21] according to official guidelines" ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "3-Table2-1.png", "5-Figure2-1.png", "8-Table3-1.png", "9-Table4-1.png" ] }
[ "What were the sizes of the test sets?" ]
[ [ "1905.07464-3-Table1-1.png", "1905.07464-Datasets-0" ] ]
[ "Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences" ]
553
1901.09755
Language Independent Sequence Labelling for Opinion Target Extraction
In this research note we present a language independent system to model Opinion Target Extraction (OTE) as a sequence labelling task. The system consists of a combination of clustering features implemented on top of a simple set of shallow local features. Experiments on the well known Aspect Based Sentiment Analysis (ABSA) benchmarks show that our approach is very competitive across languages, obtaining best results for six languages in seven different datasets. Furthermore, the results provide further insights into the behaviour of clustering features for sequence labelling tasks. The system and models generated in this work are available for public use and to facilitate reproducibility of results.
{ "paragraphs": [ [ "Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods.", "Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review BIBREF0 , BIBREF1 . A well known benchmark for polarity classification at document level is that of BIBREF2 . Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include “food quality”, “price”, “service”, “restaurant ambience”, etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include “size”, “battery life”, “hard drive capacity”, etc.", "In the review shown by Figure FIGREF1 there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative.", "In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure FIGREF1 into the BIO scheme for learning sequence labelling models BIBREF3 . Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure FIGREF1 is implicit.", "We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks BIBREF4 , BIBREF5 , BIBREF6 .", "The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling BIBREF7 by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. 
Every experiment and result presented in this note is novel.", "In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by BIBREF7 , but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results." ], [ "Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.", "Closer to our work, BIBREF13 , BIBREF14 and BIBREF15 approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMM whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet BIBREF16 .", "Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval BIBREF4 , BIBREF5 , BIBREF6 provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task.", "Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called “author's favorability” towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended." ], [ "Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. 
Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain.", "The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure FIGREF1 illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (“dry”, “greasy”, “loud and rude”) are not annotated.", "Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team BIBREF17 , BIBREF18 was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm BIBREF12 on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-mean clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters BIBREF19 . They were the best system in 2014 and 2016. In 2015 they obtained the second best result, in which the best system, a preliminary version of the one presented in this note, was submitted by the EliXa team BIBREF20 .", "From 2015 onwards most works have been based on deep learning. BIBREF21 applied RNNs on top of a variety of pre-trained word embeddings, while BIBREF22 presented an architecture in which a RNN based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art.", " BIBREF23 presented a 7 layer deep CNN combining word embeddings trained on a INLINEFORM0 5 billion word corpus extracted from Amazon BIBREF24 , POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet BIBREF25 a concept-level knowledge based build for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining best results up to date on that benchmark.", "More recently, BIBREF26 proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. 
However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions. As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although BIBREF26 did not release the datasets with the annotated opinion expressions, Figure FIGREF5 illustrates what these annotations would look like. Thus, two new attributes (pfrom and pto) annotate the opinion expressions for each of the three opinions (“dry”, “greasy” and “loud and rude”, respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only).", "Finally, BIBREF27 develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. As BIBREF26 , they use opinion expressions annotations for a joint modelling of opinion targets and expressions. However, unlike BIBREF26 they do not manually annotate the opinion expressions. Instead they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state of the art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets.", "With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best score for French and Dutch, second in Spanish but with very poor results for English, well below the baseline. The GTI team BIBREF28 implemented a CRF system using POS, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English.", "Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. BIBREF23 obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas BIBREF27 provide the state of the art for the 2016 benchmark. Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA.", "As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English BIBREF23 , or perhaps because the CMLA approach of BIBREF26 would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets." ], [ "The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used." 
], [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "Additionally, we think it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only the %35.59 of the targets are multiwords whereas for Dutch the percentage goes down to %25.68. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that in the ABSA data there is less than half the number of multiword targets than the number of multiword entities that can be found in the CoNLL Spanish and Dutch data (%35.59 vs %74.33 for Spanish and %25.68 vs %44.96 for Dutch)." ], [ "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.", "In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 .", "The number of words used for each dataset, language and cluster type are described in Table TABREF9 . For example, the first row reads “Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus”. As explained in BIBREF7 , we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters." ], [ "We use the sequence labeller implemented within IXA pipes BIBREF7 . It learns supervised models based on the Perceptron algorithm BIBREF31 . To avoid duplication of efforts, it uses the Apache OpenNLP project implementation customized with its own features. 
By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations.", "The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm.", "The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, then the class is added as feature (“not found” otherwise). As we work on a 5 token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depend on the number of nodes found in the path for each token and clustering lexicon used.", "Figure FIGREF13 depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets.", "The word representation features are combined and stacked using the clustering lexicons induced over the different data sources listed in Table TABREF9 . In other words, stacking means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); combining refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons).", "To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in BIBREF7 , including a section on how to combine the clustering features.", "A preliminary version of this system BIBREF20 was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts BIBREF4 , BIBREF5 , BIBREF6 ." ], [ "In this section we report on the experiments performed using the system and data described above. 
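The following simplified Python fragment illustrates the unigram-matching lookup over a 5-token window described above; the real implementation lives inside the Java-based IXA pipes sequence labeller, so this is only a sketch of the idea, and the lexicon entries are invented placeholders.

```python
# Simplified sketch of clustering-feature extraction over a 5-token window.
# The lexicon entries below are invented for illustration; the actual system loads
# Brown, Clark and Word2vec lexicons induced from the corpora in Table 2.

def cluster_features(tokens, index, lexicons, window=2):
    """Return cluster-class features for tokens[index-window .. index+window]."""
    feats = []
    for offset in range(-window, window + 1):
        i = index + offset
        token = tokens[i].lower() if 0 <= i < len(tokens) else None
        for name, lexicon in lexicons.items():
            value = lexicon.get(token, "not_found") if token else "pad"
            feats.append(f"{name}[{offset}]={value}")
    return feats

lexicons = {
    "clark": {"salmon": "c212", "dry": "c87"},
    "word2vec": {"salmon": "k35", "dry": "k9"},
}
tokens = ["the", "salmon", "was", "dry", "."]
print(cluster_features(tokens, index=1, lexicons=lexicons))
# 5 features per clustering lexicon for the focus token "salmon"
```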
First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset." ], [ "Table TABREF16 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively.", "The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 the precision also increases, in the 2015 setting it degrades almost by 4 points in F1 score.", "Table TABREF17 compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations with manually developed rules for detecting opinion expressions BIBREF27 . CNN-SenticNet is the 7 layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet BIBREF23 .", "LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by BIBREF21 . WDEmb BIBREF35 uses word and dependency path, linear context and dependency context embedding features the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section SECREF4 , both systems proposed by BIBREF26 . DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 BIBREF17 , BIBREF18 , BIBREF19 while the penultimate row refers to our own system for all the three benchmarks (details in Table TABREF16 ).", "The results of Table TABREF17 show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex and language-specific approaches tuned via language-specific features, such as that of DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system outperforms also the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section SECREF4 ). 
In this sense, this means that our system obtains better results than MIN and CMLA by learning the targets independently instead of jointly learning the target and those expressions that convey the polarity of the opinion, namely, the opinion expression.", "There seems to be also a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table TABREF7 . In fact, the smaller datasets seem to affect more the deep learning approaches (LSTM, WDEmb, RNCRF) where only the MIN and CMLA models obtain similar results to ours, albeit using manually added language-specific annotations.", "Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available." ], [ "We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-cross validation on the training data. The features are exactly the same as those used for English, the only change is the data on which the clusters are trained. Table TABREF19 reports on the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one of each of the clustering features.", "The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, that combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained in the 2016 English data. Finally, the difference between the best results and the results using the Local features is lower than for English, even though the Local results are similar to those obtained with the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table TABREF7 ).", "We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features used for the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains obtained by clustering features obtained from domain-specific data such as Yelp would be superior to those achieved by the clusters trained on out-of-domain data.", "In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 ." 
], [ "Considering the simplicity of our approach, we obtain best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets.", "These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers.", "The results also confirm the behaviour of these clustering algorithms to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in BIBREF7 . Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve the Local features over 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used whereas for the rest of languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that is probably due to the small size of the training data used for training the Local model.", "In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented.", "Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec size is the crucial factor. The larger the source data the better the performance. Thus, instead of choosing over one clustering type or the other, our system provides a method to effectively combining them, depending on the data sources available, to obtain robust and language independent sequence labelling systems.", "Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems including also manually-engineered features, as shown especially by the English results on the 2015 and 2016 data." ], [ "We will now discuss the shortcomings and most common errors performed by our system for the OTE task. By looking at the overall results in terms of precision and recall, it is possible to see the following patterns: With respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. 
Tables TABREF16 and TABREF19 show that adding clustering features to the Local models allows to improve the recall for every evaluation setting, although with different outcomes. Overall, precision suffers, except for French. Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by false negatives, as it can be seen in Table TABREF23 .", "Table TABREF25 displays the top 5 most common false positives and false negative errors for English, Spanish and French. By inspecting our system's output, and both the test and training sets, we found out that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters.", "With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as target. This is the case with many instances of `place' for which there seems to be some inconsistency in the actual annotation of the training and test set examples.", "", "Example (1): Avoid this place!", "Example (2): this place is a keeper!", "Example (3): it is great place to watch sporting events.", "For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida' and `restaurante' is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other other cases it is just referring to the meal or the restaurant themselves without expressing any opinion about them. The same phenomenon occurs for “food” and “restaurant” in English and for `cuisine' and `restaurant' in French.", "Span type (b) errors are typically caused by long opinion targets such as “filet mignon on top of spinach and mashed potatoes” for which our system annotates “filet” and “spinach” as separate targets, or “chicken curry and chicken tikka masala” which is wrongly tagged as one target. These cases are difficult because on the surface they look similar but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN.", "Finally, type (c) errors are usually caused by lack of generalization of our system to deal with unknown targets. 
Examples (4)-(7) contain various mentions of the “Ray's” restaurant, which is in the top 5 errors for the English 2016 test set.", "", "Example (4): After 12 years in Seattle Ray's rates as the place we always go back to.", "Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner!", "Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways!", "Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's!", "Example (8): Ray's is something of a Seattle institution", "Examples (4), (5) and (7) are FNs, (6) is an FP caused by wrongly identifying the target as “Ray's you”, whereas (8) is not even annotated in the gold standard or by our system, although it should have been." ], [ "In this research note we provide additional empirical experimentation complementing BIBREF7 , reporting best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features.", "First, Brown clusters are particularly beneficial when trained on domain-related data. This seems to be the case in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except for Turkish.", "Second, the results also show that Clark and Word2vec improve results in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora.", "Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in BIBREF7 with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks.", "The system and models for every language and dataset are available as part of the ixa-pipe-opinion module for public use and reproducibility of results." ], [ "First, we would like to thank the anonymous reviewers for their comments to improve the paper. We would also like to thank Iñaki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP)." ] ], "section_name": [ "Introduction", "Background", "ABSA Tasks at SemEval", "Methodology", "ABSA Datasets", "Unlabelled Corpora", "System", "Experimental Results", "English", "Multilingual", "Discussion and Error Analysis", "Error Analysis", "Concluding Remarks", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "751287ba1c4e73b9590faaddd00cf51270c324d8" ], "answer": [ { "evidence": [ "Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.", "In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 ." ], "extractive_spans": [ "the baseline provided by BIBREF8", "the baselines provided by the ABSA organizers" ], "free_form_answer": "", "highlighted_evidence": [ "Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.", "n spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "ea73820fcbbb451baa23441e5fc96358e1e3ae91" ], "answer": [ { "evidence": [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. 
The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "FLOAT SELECTED: Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.", "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.", "In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 ." ], "extractive_spans": [], "free_form_answer": "ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps", "highlighted_evidence": [ "Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.", "FLOAT SELECTED: Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.", "Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.\n\nIn order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. 
For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "91ea275060a44b2fa896afa9991282a4390606b4" ], "answer": [ { "evidence": [ "In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset." ], "extractive_spans": [ "Dutch", "French", "Russian", "Spanish ", "Turkish", "English " ], "free_form_answer": "", "highlighted_evidence": [ "In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "4ba557d2e098ec11194e14d86c4e95f2122d04b4" ], "answer": [ { "evidence": [ "The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm." 
], "extractive_spans": [ " Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context" ], "free_form_answer": "", "highlighted_evidence": [ "The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "What was the baseline?", "Which datasets are used?", "Which six languages are experimented with?", "What shallow local features are extracted?" ], "question_id": [ "71fca845edd33f6e227eccde10db73b99a7e157b", "93b299acfb6fad104b9ebf4d0585d42de4047051", "e755fb599690d0d0c12ddb851ac731a0a7965797", "7e51490a362135267e75b2817de3c38dfe846e21" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.", "Table 2: Unlabeled corpora to induce clusters. For each corpus and cluster type the number of words (in millions) is specified. Average training times: depending on the number of words, Brown clusters training time required between 5h and 48h. Word2vec required 1-4 hours whereas Clark clusters training lasted between 5 hours and 10 days.", "Figure 3: Unigram matching in clustering features.", "Table 3: ABSA SemEval 2014-2016 English results. BY: Brown Yelp 1000 classes; CYF100-CYR200: Clark Yelp Food 100 classes and Clark Yelp Reviews 200 classes; W2VW400: Word2vec Wikipedia 400 classes; ALL: BY+CYF100-CYR200+W2VW400.", "Table 4: ABSA SemEval 2014-2016: Comparison of English results in terms of F1 scores; ∗ refers to models enriched with human-engineered linguistic features.", "Table 5: ABSA SemEval 2016 multilingual results.", "Table 6: ABSA SemEval 2016: Comparison of multilingual results in terms of F1 scores.", "Table 7: False Positives and Negatives for every ABSA 2014-2016 setting.", "Table 8: Top five false positive (FP) and negative (FN) errors for English, Spanish and French." ], "file": [ "6-Table1-1.png", "7-Table2-1.png", "8-Figure3-1.png", "10-Table3-1.png", "10-Table4-1.png", "11-Table5-1.png", "12-Table6-1.png", "14-Table7-1.png", "14-Table8-1.png" ] }
[ "Which datasets are used?" ]
[ [ "1901.09755-6-Table1-1.png", "1901.09755-Unlabelled Corpora-1", "1901.09755-Unlabelled Corpora-0", "1901.09755-ABSA Datasets-0" ] ]
[ "ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps" ]
556
2002.05829
HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing
Computation-intensive pretrained models have been taking the lead of many natural language processing benchmarks such as GLUE. However, energy efficiency in the process of model training and inference becomes a critical bottleneck. We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing. With HULK, we compare pretrained models' energy efficiency from the perspectives of time and cost. Baseline benchmarking results are provided for further analysis. The fine-tuning efficiency of different pretrained models can differ a lot among different tasks and fewer parameter number does not necessarily imply better efficiency. We analyzed such phenomenon and demonstrate the method of comparing the multi-task efficiency of pretrained models. Our platform is available at https://sites.engineering.ucsb.edu/~xiyou/hulk/.
{ "paragraphs": [ [ "Environmental concerns of machine learning research has been rising as the carbon emission of certain tasks like neural architecture search reached an exceptional “ocean boiling” level BIBREF7. Increased carbon emission has been one of the key factors to aggravate global warming . Research and development process like parameter search further increase the environment impact. When using cloud-based machines, the environment impact is strongly correlated with budget.", "The recent emergence of leaderboards such as SQuAD BIBREF8, GLUE BIBREF0 and SuperGLUE BIBREF9 has greatly boosted the development of advanced models in the NLP community. Pretrained models have proven to be the key ingredient for achieving state of the art in conventional metrics. However, such models can be extremely expensive to train. For example, XLNet-Large BIBREF2 was trained on 512 TPU v3 chips for 500K steps, which costs around 61,440 dollars, let alone staggeringly large carbon emission.", "Moreover, despite impressive performance gain, the fine-tuning and inference efficiency of NLP models remain under-explored. As recently mentioned in a tweet, the popular AI text adventure game AI Dungeon has reached 100 million inferences. The energy efficiency of inference cost could be critical to both business planning and environment impact.", "Previous work BIBREF10, BIBREF11 on this topic proposed new metrics like FPO (floating point operations) and new practice to report experimental results based on computing budget. Other benchmarks like BIBREF12 and BIBREF13 compares the efficiency of models on the classic reading comprehension task SQuAD and machine translation tasks. However, there has not been a concrete or practical reference for accurate estimation on NLP model pretraining, fine-tunning and inference considering multi-task energy efficiency.", "Energy efficiency can be reflected in many metrics including carbon emission, electricity usage, time consumption, number of parameters and FPO as shown in BIBREF10. Carbon emission and electricity are intuitive measures yet either hard to track or hardware-dependent. Number of parameteres does not reflect the acutal cost for model training and inference. FPO is steady for models but cannot be directly used for cost estimation. Here in order to provide a practical reference for model selection for real applications, especially model development outside of academia, we keep track of the time consumption and acutal budget for comparison. Cloud based machines are employed for cost estimation as they are easily accessible and consistent in hardware configuration and performance. In the following sections, we would use time and cost to denote the time elapsed and the acutal budget in model pretraining / training / inference.", "In most NLP pretrained model setting, there are three phases: pretraining, fine-tuning and inference. If a model is trained from scratch, we consider such model has no pretraining phase but fine-tuned from scratch. Typically pretraining takes several days and hundreds of dollars, according to Table TABREF1. Fine-tuning takes a few minutes to hours, costing a lot less than pretraining phase. Inference takes several milli-seconds to seconds, costing much less than fine-tuning phase. Meanwhile, pretraining is done before fine-tuning once for all, while fine-tuning could be performed multiple times as training data updates. Inference is expected to be called numerous times for downstream applications. 
Such characteristics make it an intuitive choice to separate different phases during benchmarking.", "Our Hulk benchmark, as shown in Figure FIGREF5, utilizes several classic datasets that have been widely adopted in the community as benchmarking tasks to benchmark energy efficiency and compares pretrained models in a multi-task fashion. The tasks include natural language inference task MNLI BIBREF14, sentiment analysis task SST-2 BIBREF15 and Named Entity Recognition Task CoNLL-2003 BIBREF16. Such tasks are selected to provide a thourough comparison of end-to-end energy efficiency in pretraining, fine-tuning and inference.", "With the Hulk benchmark, we quantify the energy efficiency of model pretraining, fine-tuning and inference phase by comparing the time and cost they require to reach certain overall task-specific performance level on selected datasets. The design principle and benchmarking process are detailed in section SECREF2. We also explore the relation between model parameter and fine-tuning efficiency and demonstrate consistency of energy efficiency between tasks for different pretrained models." ], [ "For pretraining phase, the benchmark is designed to favor energy efficient models in terms of time and cost that each model takes to reach certain multi-task performance pretrained from scratch. For example, we keep track of the time and cost of a BERT model pretrained from scratch. After every thousand of pretraining steps, we clone the model for fine-tuning and see if the final performance can reach our cut-off level. When the level is reached, time and cost for pretraining is used for comparison. Models faster or cheaper to pretrain are recommended.", "For fine-tuning phase, we consider the time and cost each model requires to reach certain multi-task performance fine-tuned from given pretrained models because for each single task with different difficulty and instance number, the fine-tuning characteristics may differ a lot. When pretrained models are used to deal with non-standard downstream task, especially ad hoc application in industry, the training set's difficulty cannot be accurately estimated. Therefore, it's important to compare the multi-task efficiency for model choice.", "For inference phase, the time and cost of each model making inference for single instance on multiple tasks are considered in the similar fashion as the fine-tuning phase." ], [ "The datasets we used are widely adopted in NLP community. Quantitative details of datasets can be found in Table TABREF7. The selected tasks are shown below:", "leftmargin=15pt,labelindent=15pt [enumerate]wide=0pt, leftmargin=15pt, labelwidth=15pt, align=left", "CoNLL 2003 The Conference on Computational Natural Language Learning (CoNLL-2003) shared task concerns language-independent named entity recognition BIBREF16. The task concentrates on four types of named entities: persons, locations, organizations and other miscellaneous entities. Here we only use the English dataset. The English data is a collection of news wire articles from the Reuters Corpus. Result is reflected as F1 score considering the label accuracy and recall on dev set.", "MNLI The Multi-Genre Natural Language Inference Corpus BIBREF14 is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). 
The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The accuracy score is reported as the average of performance on the matched and mismatched dev sets.", "SST-2 The Stanford Sentiment Treebank BIBREF15 consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. Following the setting of GLUE, we also use the two-way (positive/negative) class split, and use only sentence-level labels.", "The tasks are selected based on how representative the dataset is. CoNLL 2003 has been a widely used dataset for named entity recognition and actually requires token-level labeling as output. NER is a core NLP task and CoNLL 2003 has been a classic dataset in this area. SST-2 and MNLI are part of the GLUE benchmark, representing sentence-level labeling tasks. SST-2 has been frequently used in sentiment analysis across different generations of models. MNLI is a newly introduced large dataset for natural language inference. The training time for MNLI is relatively long and the task requires a lot more training instances. We select the three tasks for a diverse yet practical benchmark for pretrained models without constraining the models to sentence-level classification tasks. In addition, their efficiency differs significantly in the fine-tuning and inference phases. Such differences are still reflected in the final score after normalization, as shown in Table TABREF8. Provided with more computing resources, we can bring in more datasets for even more thorough benchmarking in the future. We illustrate the evaluation criteria in the following subsection." ], [ "In machine learning model training and inference, slight parameter changes can have a subtle impact on the final result. In order to provide a practical reference for pretrained model selection, we compare models' end-to-end performance with respect to the pretraining time, pretraining cost, training time, training cost, inference time, inference latency and cost, following the setting of BIBREF12.", "For the pretraining phase, we design the process to explore how much computing resource is required to reach a certain multi-task performance by fine-tuning after the pretraining. Therefore, during model pretraining, after a number of steps, we use the half-pretrained model for fine-tuning and see if the fine-tuned model can reach our cut-off performance. When it does, we count the time and cost of the pretraining process for benchmarking and analysis.", "For the fine-tuning phase, we want to compare the general efficiency of a pretrained model reaching cut-off performance on a selected dataset. During fine-tuning, we evaluate the half-fine-tuned model on the development set after a certain number of steps. When the performance reaches our cut-off level, we count the time and cost of this fine-tuning process for benchmarking and analysis. To be specific, for a single pretrained model, the efficiency score on different tasks is defined as the sum of normalized time and cost. Here we normalize the time and cost because they vary dramatically between tasks. In order to simplify the process, we compute the ratio of BERTLARGE's time and cost to that of each model as the normalized measure, as shown in Table TABREF8 and Table TABREF9.", "For the inference phase, we follow the principles of fine-tuning except that we use the time and cost of inference for benchmarking."
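To make this protocol concrete, here is a minimal sketch (not the official HULK harness) of measuring time-to-cutoff during fine-tuning and computing the normalized score against a BERTLARGE reference; `train_steps`, `evaluate_dev`, the cutoff and the reference numbers are all stand-in assumptions.

```python
# Minimal sketch of the fine-tuning evaluation protocol: train until the dev-set
# metric reaches a task-specific cutoff, record elapsed wall-clock time, then
# normalize against a BERT-LARGE reference. `train_steps` and `evaluate_dev` are
# stand-ins for whatever training/evaluation loop a submission actually uses.
import time

def time_to_cutoff(train_steps, evaluate_dev, cutoff, eval_every=500, max_steps=100_000):
    """Return wall-clock seconds needed to reach `cutoff` on the dev set."""
    start = time.time()
    for _ in range(eval_every, max_steps + 1, eval_every):
        train_steps(eval_every)              # run `eval_every` optimization steps
        if evaluate_dev() >= cutoff:         # e.g. dev F1 for CoNLL, accuracy for SST-2/MNLI
            return time.time() - start
    return float("inf")                      # the cutoff was never reached

def normalized_score(model_time, model_cost, ref_time, ref_cost):
    """Sum of normalized time and cost: ratio of the BERT-LARGE reference to the model."""
    return ref_time / model_time + ref_cost / model_cost

# Hypothetical numbers: reference (BERT-LARGE) needs 1200 s / $1.02, candidate 800 s / $0.68
print(normalized_score(800, 0.68, 1200, 1.02))   # 3.0
```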
], [ "The selection of performance cutoff could be very critical because we consider certrain models being qualified after reaching certrain performance on development set. Meanwhile, certrain tasks can reach a “sweet point” where after relatively smaller amount of training time, the model reaches performance close to the final results despite negelagible difference. We select the cut-off performance threshold by obersvering the recent state-of-the-art performance on selected tasks." ], [ "Submissions can be made to our benchmark through sending code and results to our Hulk benchmark CodaLab competition following the guidelines in both our FAQ part of website and competition introduction. We require the submissions to include detailed end-to-end model training information including model run time, cost(cloud based machine only), parameter number and part of the development set output for result validation. A training / fine-tuning log including time consumption and dev set performance after certain steps is also required. For inference, development set output, time consumption and hardware / software details should be provided. In order for model reproducity, source code is required." ], [ "For computation-heavy tasks, we adopt the reported resource requirements in the original papers as the pretraining phase baselines.", "For fine-tuning and inference phase, we conduct extensive experiments on given hardware (GTX 2080Ti GPU) with different model settings as shown in Table TABREF8 and Table TABREF9. We also collect the devlopment set performance with time in fine-tuning to investigate in how the model are fine-tuned for different tasks.", "In our fine-tuning setting, we are given a specific hardware and software configuration, we adjust the hyper-parameter to minimize the time required for fine-tuning towards cut-off performance. For example, we choose proper batchsize and learning rate for BERTBASE to make sure the model converges and can reach expected performance as soon as possible with parameter searching.", "As shown in Figure FIGREF15, the fine-tuning performance curve differs a lot among pretrained models. The x-axis denoting time consumed is shown in log-scale for better comparison of different models. None of the models acutally take the lead in all tasks. However, if two pretrained models are in the same family, such as BERTBASE and BERTLARGE, the model with smaller number of parameters tend to converge a bit faster than the other in the NER and SST-2 task. In the MNLI task, such trend does not apply possibly due to increased diffculty level and training instance number which favor larger model capacity.", "Even though ALBERT model has a lot less parameters than BERT, according to Table TABREF1, the fine-tuning time of ALBERT model is significantly more than BERT models. This is probably because ALBERT uses large hidden size and more expensive matrix computation. The parameter sharing technique actually makes it harder to fine-tune the model. RoBERTaLARGE model relatively stable in all tasks." ], [ "GLUE benchmark BIBREF0 is a popular multi-task benchmarking and diagnosis platform providing score evaluating multi-task NLP models considering multiple single task performance. SuperGLUE BIBREF9 further develops the task and enriches the dataset used in evaluation, making the task more challenging. 
These multi-task benchmarks do not take computation efficiency into consideration, but they have nonetheless driven the development of pretrained models.", "MLPerf BIBREF13 compares training and inference efficiency from a hardware perspective, providing helpful resources on hardware selection and model training. Their benchmark is limited to several typical applications, including image classification and machine translation.", "Previous work BIBREF10, BIBREF11 on related topics working towards “Green AI” proposes new metrics such as FPO and new principles for efficiency evaluation. We make further, more detailed and practical contributions towards model energy efficiency benchmarking. Other work such as DAWNBenchmark BIBREF12 looks into end-to-end model efficiency comparison for both computer vision and the NLP task SQuAD. That benchmark does not compare multi-task efficiency performance and covers only one NLP task.", "The Efficient NMT shared task of The 2nd Workshop on Neural Machine Translation and Generation proposed an efficiency track to compare the inference time of neural machine translation models. Our platform covers more phases and supports multi-task comparison." ], [ "We developed the Hulk platform, which focuses on the energy efficiency evaluation of NLP models based on their end-to-end performance on selected NLP tasks. The Hulk platform compares models in the pretraining, fine-tuning, and inference phases, making it straightforward to follow and to propose models that are more training- and inference-efficient. In our baseline testing, we compared the fine-tuning efficiency of the given models and demonstrated that more parameters lead to slower fine-tuning within the same model family, but that this does not hold across different model families. We expect more submissions in the future to flourish and enrich our benchmark." ], [ "This work is supported by the Institute of Energy Efficiency (IEE) at UCSB's seed grant in Summer 2019 to improve the energy efficiency of AI and machine learning." ] ], "section_name": [ "Introduction", "Benchmark Overview", "Benchmark Overview ::: Dataset Overview", "Benchmark Overview ::: Evaluation Criteria", "Benchmark Overview ::: Performance Cut-off Selection", "Benchmark Overview ::: Submission to Benchmark", "Baseline Settings and Analysis", "Related Work", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "8dbaaaf5f00c916eb0c8489aac1e64d873dfa347" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "extractive_spans": [], "free_form_answer": "$1,728", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "4c7a36332b74b5ca58e5eacd28bed7d73281a915" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "extractive_spans": [], "free_form_answer": "BERT, XLNET RoBERTa, ALBERT, DistilBERT", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "How much does it minimally cost to fine-tune some model according to benchmarking framework?", "What models are included in baseline benchmarking results?" ], "question_id": [ "02417455c05f09d89c2658f39705ac1df1daa0cd", "6ce057d3b88addf97a30cb188795806239491154" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are estimated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million.", "Table 2: Dataset Information", "Figure 1: Screenshot of the leaderboard of website.", "Table 3: Multi-task Baseline Fine-tuning Costs. Time is given in seconds and score is computed by the division of TimeBERTLARGE /Timemodel.The experiments are conducted on a single GTX 2080 Ti GPU following the evaluation ceriteria. The overall score is computed by summing up scores of each individual task. For cost based leaderboads, we also use the budget to compute a new score for each task and summarize similarly. “N/A” means fail to reach the given performance after 5 epochs.", "Table 4: Multi-task Baseline Inference Costs. Time is given in milliseconds and score is computed by the division of TimeBERTLARGE /Timemodel.The experiments are conducted on a single GTX 2080 Ti GPU following the evaluation ceriteria similar to fine-tuning part. It’s clear that the inference time between tasks is more consistent compared to fine-tuning phase.", "Figure 2: The comparison between different pretrained models for CoNLL 2003, SST-2 and MNLI datasets trained on a single GTX 2080Ti GPU. The curves are smoothed by computing average with 2 adjacent data points. The experiments are conducted by selecting hyper-parameters to minimize the time consumption yet making sure the model can converge after certain amount of time. Results are demonstrated using performance on development score after certain steps finetuned on the training dataset." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "4-Table3-1.png", "5-Table4-1.png", "6-Figure2-1.png" ] }
[ "How much does it minimally cost to fine-tune some model according to benchmarking framework?", "What models are included in baseline benchmarking results?" ]
[ [ "2002.05829-2-Table1-1.png" ], [ "2002.05829-2-Table1-1.png" ] ]
[ "$1,728", "BERT, XLNET RoBERTa, ALBERT, DistilBERT" ]
558
1912.00864
Conclusion-Supplement Answer Generation for Non-Factoid Questions
This paper tackles the goal of conclusion-supplement answer generation for non-factoid questions, which is a critical issue in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI), as users often require supplementary information before accepting a conclusion. The current encoder-decoder framework, however, has difficulty generating such answers, since it may become confused when it tries to learn several different long answers to the same non-factoid question. Our solution, called an ensemble network, goes beyond single short sentences and fuses logically connected conclusion statements and supplementary statements. It extracts the context from the conclusion decoder's output sequence and uses it to create supplementary decoder states on the basis of an attention mechanism. It also assesses the closeness of the question encoder's output sequence and the separate outputs of the conclusion and supplement decoders as well as their combination. As a result, it generates answers that match the questions and have natural-sounding supplementary sequences in line with the context expressed by the conclusion sequence. Evaluations conducted on datasets including "Love Advice" and "Arts & Humanities" categories indicate that our model outputs much more accurate results than the tested baseline models do.
{ "paragraphs": [ [ "Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two types of question: factoid and non-factoid ones. The former sort asks, for instance, for the name of a thing or person such as “What/Who is $X$?”. The latter sort includes more diverse questions that cannot be answered by a short fact. For instance, users may ask for advice on how to make a long-distance relationship work well or for opinions on public issues. Significant progress has been made in answering factoid questions BIBREF0, BIBREF1; however, answering non-factoid questions remains a challenge for QA modules.", "Long short term memory (LSTM) sequence-to-sequence models BIBREF2, BIBREF3, BIBREF4 try to generate short replies to the short utterances often seen in chat systems. Evaluations have indicated that these models have the possibility of supporting simple forms of general knowledge QA, e.g. “Is the sky blue or black?”, since they learn commonly occurring sentences in the training corpus. Recent machine reading comprehension (MRC) methods BIBREF5, BIBREF6 try to return a single short answer to a question by extracting answer spans from the provided passages. Unfortunately, they may generate unsatisfying answers to regular non-factoid questions because they can easily become confused when learning several different long answers to the same non-factoid question, as pointed out by BIBREF7, BIBREF8.", "This paper tackles a new problem: conclusion-supplement answer generation for non-factoid questions. Here, the conclusion consists of sentences that directly answer the question, while the supplement consists of information supporting the conclusion, e.g., reasons or examples. Such conclusion-supplement answers are important for helping questioners decide their actions, especially in NLU. As described in BIBREF9, users prefer a supporting supplement before accepting an instruction (i.e., a conclusion). Good debates also include claims (i.e., conclusions) about a topic and supplements to support them that will allow users to reach decisions BIBREF10. The following example helps to explain how conclusion-supplement answers are useful to users: “Does separation by a long distance ruin love?” Current methods tend to answer this question with short and generic replies, such as, “Distance cannot ruin true love”. The questioner, however, is not likely to be satisfied with such a trite answer and will want to know how the conclusion was reached. If a supplemental statement like “separations certainly test your love” is presented with the conclusion, the questioner is more likely to accept the answer and use it to reach a decision. Furthermore, there may be multiple answers to a non-factoid question. For example, the following answer is also a potential answer to the question: “distance ruins most relationships. You should keep in contact with him”. The current methods, however, have difficulty generating such conclusion-supplement answers because they can become easily confused when they try to learn several different and long answers to a non-factoid question.", "To address the above problem, we propose a novel architecture, called the ensemble network. 
It is an extension of existing encoder-decoder models, and it generates two types of decoder output sequence, conclusion and supplement. It uses two viewpoints for selecting the conclusion statements and supplementary statements. (Viewpoint 1) The context present in the conclusion decoder's output is linked to supplementary-decoder output states on the basis of an attention mechanism. Thus, the context of the conclusion sequence directly impacts the decoder states of the supplement sequences. This, as a result, generates natural-sounding supplementary sequences. (Viewpoint 2) The closeness of the question sequence and conclusion (or supplement) sequence as well as the closeness of the question sequence with the combination of conclusion and supplement sequences is considered. By assessing the closeness at the sentence level and sentence-combination level in addition to at the word level, it can generate answers that include good supplementary sentences following the context of the conclusion. This avoids having to learn several different conclusion-supplement answers assigned to a single non-factoid question and generating answers whose conclusions and supplements are logically inconsistent with each other.", "Community-based QA (CQA) websites tend to provide answers composed of conclusion and supplementary statements; from our investigation, 77% of non-factoid answers (love advice) in the Oshiete-goo (https://oshiete.goo.ne.jp) dataset consist of these two statement types. The same is true for 82% of the answers in the Yahoo non-factoid dataset related to the fields of social science, society & culture and arts & humanities. We used the above-mentioned CQA datasets in our evaluations, since they provide diverse answers given by many responders. The results showed that our method outperforms existing ones at generating correct and natural answers. We also conducted an love advice service in Oshiete goo to evaluate the usefulness of our ensemble network." ], [ "The encoder-decoder framework learns how to transform one representation into another. Contextual LSTM (CLSTM) incorporates contextual features (e.g., topics) into the encoder-decoder framework BIBREF11, BIBREF12. It can be used to make the context of the question a part of the answer generation process. HieRarchical Encoder Decoder (HRED) BIBREF12 extends the hierarchical recurrent encoder-decoder neural network into the dialogue domain; each question can be encoded into a dense context vector, which is used to recurrently decode the tokens in the answer sentences. Such sequential generation of next statement tokens, however, weakens the original meaning of the first statement (question). Recently, several models based on the Transformer BIBREF13, such as for passage ranking BIBREF14, BIBREF15 and answer selection BIBREF16, have been proposed to evaluate question-answering systems. There are, however, few Transformer-based methods that generate non-factoid answers.", "Recent neural answer selection methods for non-factoid questions BIBREF17, BIBREF18, BIBREF19 learn question and answer representations and then match them using certain similarity metrics. They use open datasets stored at CQA sites like Yahoo! Answers since they include many diverse answers given by many responders and thus are good sources of non-factoid QA training data. 
The above methods, however, can only select and extract answer sentences, they do not generate them.", "Recent machine reading comprehension methods try to answer a question with exact text spans taken from provided passages BIBREF20, BIBREF6, BIBREF21, BIBREF22. Several studies on the MS-MARCO dataset BIBREF23, BIBREF5, BIBREF8 define the task as using multiple passages to answer a question where the words in the answer are not necessarily present in the passages. Their models, however, require passages other than QA pairs for both training and testing. Thus, they cannot be applied to CQA datasets that do not have such passages. Furthermore, most of the questions in their datasets only have a single answer. Thus, we think their purpose is different from ours; generating answers for non-factoid questions that tend to demand diverse answers.", "There are several complex QA tasks such as those present in the TREC complex interactive QA tasks or DUC complex QA tasks. Our method can be applied to those non-factoid datasets if an access fee is paid." ], [ "This section describes our conclusion-supplement answer generation model in detail. An overview of its architecture is shown in Figure FIGREF3.", "Given an input question sequence ${\\bf {Q}} = \\lbrace {\\bf {q}}_1, \\cdots , {\\bf {q}}_i, \\cdots , {\\bf {q}}_{N_q}\\rbrace $, the proposal outputs a conclusion sequence ${\\bf {C}} = \\lbrace {\\bf {c}}_1, \\cdots , {\\bf {c}}_t, \\cdots , {\\bf {c}}_{N_c}\\rbrace $, and supplement sequence ${\\bf {S}} = \\lbrace {\\bf {s}}_1, \\cdots , {\\bf {s}}_t, \\cdots , {\\bf {s}}_{N_s}\\rbrace $. The goal is to learn a function mapping from ${\\bf {Q}}$ to ${\\bf {C}}$ and ${\\bf {S}}$. Here, ${\\bf {q}}_i$ denotes a one-of-$K$ embedding of the $i$-th word in an input sequence of length $N_q$. ${\\bf {c}}_t$ (${\\bf {s}}_t$) denotes a one-of-$K$ embedding of the $t$-th word in an input sequence of length $N_c$ ($N_s$)." ], [ "The encoder converts the input $\\bf {Q}$ into a question embedding, ${\\bf {O}}_q$, and hidden states, ${\\bf {H}}={\\lbrace {\\bf {h}}_i\\rbrace _i}$.", "Since the question includes several pieces of background information on the question, e.g. on the users' situation, as well as the question itself, it can be very long and composed of many sentences. For this reason, we use the BiLSTM encoder, which encodes the question in both directions, to better capture the overall meaning of the question. It processes both directions of the input, $\\lbrace {\\bf {q}}_1, \\cdots , {\\bf {q}}_{N_q}\\rbrace $ and $\\lbrace {\\bf {q}}_{N_q}, \\cdots , {\\bf {q}}_{1}\\rbrace $, sequentially. At time step $t$, the encoder updates the hidden state by:", "where $f()$ is an LSTM unit, and ${\\bf {h}}^f_i$ and ${\\bf {h}}^b_i$ are hidden states output by the forward-direction LSTM and backward-direction LSTM, respectively.", "We also want to reflect sentence-type information such as conclusion type or supplement type in sequence-to-sequence learning to better understand the conclusion or supplement sequences. We achieve this by adding a sentence type vector for conclusion $\\bf {C}$ or for supplement $\\bf {S}$ to the input gate, forget gate output gate, and cell memory state in the LSTM model. This is equivalent to processing a composite input [${\\bf {q}}_i$, $\\bf {C}$] or [${\\bf {q}}_i$, $\\bf {S}$] in the LSTM cell that concatenates the word embedding and sentence-type embedding vectors. 
We use this modified LSTM in the above BiLSTM model as:", "When encoding the question to decode the supplement sequence, ${\\bf {S}}$ is input instead of ${\\bf {C}}$ in the above equation.", "The BiLSTM encoder then applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word. As a result, it generates a fixed-sized distributed vector representation for the conclusion, ${\\bf {O}}^c_q$, and another for the supplement, ${\\bf {O}}^s_q$. ${\\bf {O}}^c_q$ and ${\\bf {O}}^s_q$ are different since the encoder is biased by the corresponding sentence-type vector, $\\bf {C}$ or $\\bf {S}$.", "As depicted in Figure FIGREF3, the BiLSTM encoder processes each word with a sentence-type vector (i.e. $\\bf {C}$ or $\\bf {S}$) and the max-pooling layer to produce the question embedding ${\\bf {O}}^c_q$ or ${\\bf {O}}^s_q$. These embeddings are used as context vectors in the decoder network for the conclusion and supplement." ], [ "The decoder is composed of a conclusion decoder and supplement decoder. Here, let ${\\bf {h}}^{\\prime }_t$ be the hidden state of the $t$-th LSTM unit in the conclusion decoder. Similar to the encoder, the decoder also decodes a composite input [${\\bf {c}}_t$, $\\bf {C}$] in an LSTM cell that concatenates the conclusion word embedding and sentence-type embedding vectors. It is formulated as follows:", "where $f^{\\prime }()$ denotes the conclusion decoder LSTM, $\\operatornamewithlimits{softmax}_c$ the probability of word $c$ given by a softmax layer, $c_t$ the $t$-th conclusion decoded token, and ${\\bf {c}}_t$ the word embedding of $c_t$. The supplement decoder's hidden state ${\\bf {h}}^{\\prime \\prime }_t$ is computed in the same way with ${\\bf {h}}^{\\prime }_t$; however, it is updated in the ensemble network described in the next subsection.", "As depicted in Figure FIGREF3, the LSTM decoder processes tokens according to question embedding ${\\bf {O}}^c_q$ or ${\\bf {O}}^s_q$, which yields a bias corresponding to the sentence-type vector, $\\bf {C}$ or $\\bf {S}$. The output states are then input to the ensemble network." ], [ "The conventional encoder-decoder framework often generates short and simple sentences that fail to adequately answer non-factoid questions. Even if we force it to generate longer answers, the decoder output sequences become incoherent when read from the beginning to the end.", "The ensemble network solves the above problem by (1) passing the context from the conclusion decoder's output sequence to the supplementary decoder hidden states via an attention mechanism, and (2) considering the closeness of the encoder's input sequence to the decoders' output sequences as well as the closeness of the encoder's input sequence to the combination of decoded output sequences.", "(1) To control the context, we assess all the information output by the conclusion decoder and compute the conclusion vector, ${\\bf {O}}_c$. ${\\bf {O}}_c$ is a sentence-level representation that is more compact, abstractive, and global than the original decoder output sequence. To get it, we apply BiLSTM to the conclusion decoder's output states $\\lbrace {{{\\tilde{\\bf {y}}}}_t^c} \\rbrace _t$; i.e., $\\lbrace {{{\\tilde{\\bf {y}}}}_t^c} \\rbrace _t = \\lbrace {\\bf {U}}\\cdot \\operatornamewithlimits{softmax}({\\bf {h}}^{\\prime }_t)\\rbrace _t$, where word representation matrix $\\bf {U}$ holds the word representations in its columns. 
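A minimal PyTorch-style sketch may make the sentence-type-conditioned encoder described above more concrete: each word embedding is concatenated with a conclusion/supplement type embedding, passed through a BiLSTM, and max-pooled into the question embedding $O^c_q$ or $O^s_q$. This is a simplified approximation rather than the authors' implementation; the use of a stock `nn.LSTM` (instead of a hand-modified LSTM cell), the dimensions, and all identifiers are assumptions.

```python
import torch
import torch.nn as nn

class SentenceTypeBiLSTMEncoder(nn.Module):
    """BiLSTM question encoder biased by a sentence-type vector (conclusion/supplement)."""

    def __init__(self, vocab_size, d_word=100, d_type=50, d_hidden=250):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        self.type_emb = nn.Embedding(2, d_type)          # 0 = conclusion (C), 1 = supplement (S)
        self.bilstm = nn.LSTM(d_word + d_type, d_hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, question_ids, sentence_type):
        # question_ids: (batch, N_q); sentence_type: (batch,) of 0/1
        w = self.word_emb(question_ids)                   # (batch, N_q, d_word)
        t = self.type_emb(sentence_type)                  # (batch, d_type)
        t = t.unsqueeze(1).expand(-1, w.size(1), -1)      # repeat over time steps
        h, _ = self.bilstm(torch.cat([w, t], dim=-1))     # (batch, N_q, 2*d_hidden)
        o_q, _ = h.max(dim=1)                             # max-pool over words
        return o_q, h                                     # question embedding + hidden states

# Toy usage
enc = SentenceTypeBiLSTMEncoder(vocab_size=1000)
q = torch.randint(0, 1000, (2, 7))
o_c, _ = enc(q, torch.zeros(2, dtype=torch.long))   # O^c_q (conclusion-biased)
o_s, _ = enc(q, torch.ones(2, dtype=torch.long))    # O^s_q (supplement-biased)
```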
At time step $t$, the BiLSTM encoder updates the hidden state by:", "where ${\\bf {h}}^{c,f}_t$ and ${\\bf {h}}^{c,b}_t$ are the hidden states output by the forward LSTM and backward LSTM in the conclusion encoder, respectively. It applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word to compute the embedding for conclusion ${\\bf {O}}_c$. Next, it computes the context vector ${\\bf {cx}}_t$ at the $t$-th step by using the $(t\\!\\!-\\!\\!1)$-th output hidden state of the supplement decoder, ${\\bf {h}}^{\\prime \\prime }_{t\\!-\\!1}$, weight matrices, ${\\bf {V}}_a$ and ${\\bf {W}}_a$, and a sigmoid function, $\\sigma $:", "This computation lets our ensemble network extract a conclusion-sentence level context. The resulting supplement sequences follow the context of the conclusion sequence. Finally, ${{\\bf {h}}}^{\\prime \\prime }_t$ is computed as:", "$z$ can be $i$, $f$, or $o$, which represent three gates (e.g., input ${\\bf {i}}_t$, forget ${\\bf {f}}_t$, and output ${\\bf {o}}_t$). ${\\bf {l}}_t$ denotes a cell memory vector. ${{\\bf {W}}}^a_z$ and ${{\\bf {W}}}^a_l$ denote attention parameters.", "(2) To control the closeness at the sentence level and sentence-combination level, it assesses all the information output by the supplement decoder and computes the supplement vector, ${\\bf {O}}_s$, in the same way as it computes ${\\bf {O}}_c$. That is, it applies BiLSTM to the supplement decoder's output states $\\lbrace {{{\\tilde{\\bf {y}}}}_t^s} \\rbrace _t$; i.e., $\\lbrace {{{\\tilde{\\bf {y}}}}_t^s} \\rbrace _t = \\lbrace {\\bf {U}}\\!\\cdot \\! \\operatornamewithlimits{softmax}({{\\bf {h}}_t^{\\prime \\prime }})\\rbrace _t$, where the word representations are found in the columns of $\\bf {U}$. Next, it applies a max-pooling layer to all hidden vectors in order to compute the embeddings for supplement ${\\bf {O}}_s$. Finally, to generate the conclusion-supplement answers, it assesses the closeness of the embeddings for the question ${\\bf {O}}_q$ to those for the answer sentences (${\\bf {O}}_c$ or ${\\bf {O}}_s$) and their combination ${\\bf {O}}_c$ and ${\\bf {O}}_s$. The loss function for the above metrics is described in the next subsection.", "As depicted in Figure FIGREF3, the ensemble network computes the conclusion embedding ${\\bf {O}}_c$, the attention parameter weights from ${\\bf {O}}_c$ to the decoder output supplement states (dotted lines represent attention operations), and the supplement embedding ${\\bf {O}}_s$. Then, ${\\bf {O}}_c$ and ${\\bf {O}}_s$ are input to the loss function together with the question embedding ${\\bf {O}}_q = [{\\bf {O}}^c_q,{\\bf {O}}^s_q]$." ], [ "Our model uses a new loss function rather than generative supervision, which aims to maximize the conditional probability of generating the sequential output $p({\\bf {y}}|{\\bf {q}})$. This is because we think that assessing the closeness of the question and an answer sequence as well as the closeness of the question to two answer sequences is useful for generating natural-sounding answers.", "The loss function is for optimizing the closeness of the question and conclusion and that of the question and supplement as well as for optimizing the closeness of the question with the combination of the conclusion and supplement. 
The training loss ${\\cal {L}}_s$ is expressed as the following hinge loss, where ${\\bf {O}}^{+}$ is the output decoder vector for the ground-truth answer, ${\\bf {O}}^{-}$ is that for an incorrect answer randomly chosen from the entire answer space, $M$ is a constant margin, and $\\mathbb {A}$ is set equal to $\\lbrace [{\\bf {O}}^{+}_c, {\\bf {O}}^{-}_s], [{\\bf {O}}^{-}_c, {\\bf {O}}^{+}_s], [{\\bf {O}}^{-}_c, {\\bf {O}}^{-}_s]\\rbrace $:", "", "The key idea is that ${\\cal {L}}_s$ checks whether or not the conclusion, supplement, and their combination have been well predicted. In so doing, ${\\cal {L}}_s$ can optimize not only the prediction of the conclusion or supplement but also the prediction of the combination of conclusion and supplement.", "The model is illustrated in the upper part of Figure FIGREF3; $({\\bf {O}}_q, {\\bf {O}}_c, {\\bf {O}}_s)$ is input to compute the closeness and sequence combination losses." ], [ "The training loss ${\\cal {L}}_w$ is used to check ${\\cal {L}}_s$ and the cross-entropy loss in the encoder-decoder model. In the following equation, the conclusion and supplement sequences are merged into one sequence $\\bf {Y}$ of length $T$, where $T\\!=\\!N_c\\!+\\!N_s$.", "$\\alpha $ is a parameter to control the weighting of the two losses. We use adaptive stochastic gradient descent (AdaGrad) to train the model in an end-to-end manner. The loss of a training batch is averaged over all instances in the batch.", "Figure FIGREF3 illustrates the loss for the ensemble network and the cross-entropy loss." ], [ "We compared the performance of our method with those of (1) Seq2seq, a seq2seq attention model proposed by BIBREF4; (2) CLSTM, i.e., the CLSTM model BIBREF11; (3) Trans, the Transformer BIBREF13, which has proven effective for common NLP tasks. In these three methods, conclusion sequences and supplement sequences are decoded separately and then joined to generate answers. They give more accurate results than methods in which the conclusion sequences and supplement sequences are decoded sequentially. We also compared (4) HRED, a hierarchical recurrent encoder-decoder model BIBREF12 in which conclusion sequences and supplement sequences are decoded sequentially to learn the context from conclusion to supplement; (5) NAGMWA, i.e., our neural answer generation model without an attention mechanism. This means that NAGMWA does not pass ${\\bf {cx}}_t$ in Eq. (DISPLAY_FORM10) to the decoder, and conclusion decoder and supplement decoder are connected only via the loss function ${\\cal {L}}_s$. In the tables and figures that follow, NAGM means our full model." ], [ "Our evaluations used the following two CQA datasets:" ], [ "The Oshiete-goo dataset includes questions stored in the “love advice” category of the Japanese QA site, Oshiete-goo. It has 771,956 answers to 189,511 questions. We fine-tuned the model using a corpus containing about 10,032 question-conclusion-supplement (q-c-s) triples. We used 2,824 questions from the Oshiete-goo dataset. On average, the answers to these questions consisted of about 3.5 conclusions and supplements selected by human experts. The questions, conclusions, and supplements had average lengths of 482, 41, and 46 characters, respectively. There were 9,779 word tokens in the questions and 6,317 tokens in answers; the overlap was 4,096." ], [ "We also used the Yahoo nfL6 dataset, the largest publicly available English non-factoid CQA dataset. It has 499,078 answers to 87,361 questions. 
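Since the equation for the loss $\mathcal{L}_s$ defined in the loss-function subsection above is not reproduced in this excerpt, the following sketch assumes a standard max-margin form over cosine similarities between the question embedding and the conclusion-supplement combinations. It matches the description (margin $M$, set to 0.2 in the experiments; the ground-truth combination should score above every corrupted combination in $\mathbb{A}$) but may differ in detail from the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def ensemble_hinge_loss(o_q, o_c_pos, o_s_pos, o_c_neg, o_s_neg, margin=0.2):
    """Max-margin loss over the question embedding and (conclusion, supplement) pairs.

    The ground-truth combination [O_c+, O_s+] should score higher (by `margin`)
    than every corrupted combination in A = {[O_c+, O_s-], [O_c-, O_s+], [O_c-, O_s-]}.
    """
    def score(o_c, o_s):
        # closeness of the question to a conclusion-supplement combination
        return F.cosine_similarity(o_q, torch.cat([o_c, o_s], dim=-1), dim=-1)

    pos = score(o_c_pos, o_s_pos)
    negatives = [(o_c_pos, o_s_neg), (o_c_neg, o_s_pos), (o_c_neg, o_s_neg)]
    losses = [F.relu(margin - pos + score(c, s)) for c, s in negatives]
    return torch.stack(losses).sum(dim=0).mean()

# Toy usage: O_q is the concatenation [O^c_q, O^s_q], so it is twice as wide as O_c / O_s.
d = 500
o_q = torch.randn(4, 2 * d)
loss = ensemble_hinge_loss(o_q, torch.randn(4, d), torch.randn(4, d),
                           torch.randn(4, d), torch.randn(4, d))
```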
We fine-tuned the model by using questions in the “social science”, “society & culture”, and “arts & humanities” categories, since they require diverse answers. This yielded 114,955 answers to 13,579 questions. We removed answers that included some stop words, e.g. slang words, or those that only refer to some URLs or descriptions in literature, since such answers often become noise when an answer is generated. Human experts annotated 10,299 conclusion-supplement sentences pairs in the answers.", "In addition, we used a neural answer-sentence classifier to classify the sentences into conclusion or supplement classes. It first classified the sentences into supplements if they started with phrases such as “this is because” or “therefore”. Then, it applied a BiLSTM with max-pooling to the remaining unclassified sentences, ${\\bf {A}} = \\lbrace {\\bf {a}}_1, {\\bf {a}}_2, \\cdots , {\\bf {a}}_{N_a}\\rbrace $, and generated embeddings for the un-annotated sentences, ${\\bf {O}}^a$. After that, it used a logistic sigmoid function to return the probabilities of mappings to two discrete classes: conclusion and supplement. This mapping was learned by minimizing the classification errors using the above 10,299 labeled sentences. As a result, we automatically acquired 70,000 question-conclusion-supplement triples from the entire answers. There were 11,768 questions and 70,000 answers. Thus, about 6 conclusions and supplements on average were assigned to a single question. The questions, conclusions, and supplements had average lengths of 46, 87, and 71 characters, respectively. We checked the performance of the classifier; human experts checked whether the annotation results were correct or not. They judged that it was about 81% accurate (it classified 56,762 of 70,000 sentences into correct classes). There were 15,690 word tokens in questions and 124,099 tokens in answers; the overlap was 11,353." ], [ "We conducted three evaluations using the Oshiete-goo dataset; we selected three different sets of 500 human-annotated test pairs from the full dataset. In each set, we trained the model by using training pairs and input questions in test pairs to the model. We repeated the experiments three times by randomly shuffling the train/test sets.", "For the evaluations using the nfL6 dataset, we prepared three different sets of 500 human-annotated test q-c-s triples from the full dataset. We used 10,299 human-annotated triples to train the neural sentence-type classifier. Then, we applied the classifier to the unlabeled answer sentences. Finally, we evaluated the answer generation performance by using three sets of machine-annotated 69,500 triples and 500 human-annotated test triples.", "After training, we input the questions in the test triples to the model to generate answers for both datasets. We compared the generated answers with the correct answers. The results described below are average values of the results of three evaluations.", "The softmax computation was slow since there were so many word tokens in both datasets. Many studies BIBREF24, BIBREF25, BIBREF3 restricted the word vocabulary to one based on frequency. This, however, narrows the diversity of the generated answers. Since diverse answers are necessary to properly reply to non-factoid questions, we used bigram tokens instead of word tokens to speed up the computation without restricting the vocabulary. 
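A rough sketch of the answer-sentence classifier described above: a cue-phrase rule followed by a BiLSTM with max-pooling and a logistic sigmoid. The cue list (only the two phrases mentioned in the text), the decision threshold, and the orientation of the output probability are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

SUPPLEMENT_CUES = ("this is because", "therefore")   # cue phrases named in the text

class SentenceTypeClassifier(nn.Module):
    """BiLSTM + max-pooling + sigmoid over un-annotated answer sentences."""
    def __init__(self, vocab_size, d_word=100, d_hidden=250):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_word)
        self.bilstm = nn.LSTM(d_word, d_hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d_hidden, 1)

    def forward(self, sentence_ids):                 # (batch, N_a)
        h, _ = self.bilstm(self.emb(sentence_ids))
        o_a, _ = h.max(dim=1)                        # sentence embedding O^a
        return torch.sigmoid(self.out(o_a))          # assumed: P(supplement)

def label_sentence(sentence_text, sentence_ids, model, threshold=0.5):
    if sentence_text.lower().startswith(SUPPLEMENT_CUES):    # rule stage
        return "supplement"
    return "supplement" if model(sentence_ids).item() > threshold else "conclusion"
```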
Accordingly, we put 4,087 bigram tokens in the Oshiete-goo dataset and 11,629 tokens in the nfL6 dataset.", "To measure performance, we used human judgment as well as two popular metrics BIBREF2, BIBREF25, BIBREF4 for measuring the fluency of computer-generated text: ROUGE-L BIBREF26 and BLEU-4 BIBREF27. ROUGE-L is used for measuring the performance for evaluating non-factoid QAs BIBREF28, however, we also think human judgement is important in this task." ], [ "For both datasets, we tried different parameter values and set the size of the bigram token embedding to 500, the size of LSTM output vectors for the BiLSTMs to $500 \\times 2$, and number of topics in the CLSTM model to 15. We tried different margins, $M$, in the hinge loss function and settled on $0.2$. The iteration count $N$ was set to 100.", "We varied $\\alpha $ in Eq. (DISPLAY_FORM13) from 0 to 2.0 and checked the impact of $L_s$ by changing $\\alpha $. Table TABREF18 shows the results. When $\\alpha $ is zero, the results are almost as poor as those of the seq2seq model. On the other hand, while raising the value of $\\alpha $ places greater emphasis on our ensemble network, it also degrades the grammaticality of the generated results. We set $\\alpha $ to 1.0 after determining that it yielded the best performance. This result clearly indicates that our ensemble network contributes to the accuracy of the generated answers.", "A comparison of our full method NAGM with the one without the sentence-type embedding (we call this method w/o ste) that trains separate decoders for two types of sentences is shown in Table TABREF19. The result indicated that the existence of the sentence type vector, $\\bf {C}$ or $\\bf {S}$, contributes the accuracy of the results since it distinguishes between sentence types." ], [ "The results for Oshiete-goo are shown in Table TABREF20 and those for nfL6 are shown in Table TABREF21. They show that CLSTM is better than Seq2seq. This is because it incorporates contextual features, i.e. topics, and thus can generate answers that track the question's context. Trans is also better than Seq2seq, since it uses attention from the question to the conclusion or supplement more effectively than Seq2seq. HRED failed to attain a reasonable level of performance. These results indicate that sequential generation has difficulty generating subsequent statements that follow the original meaning of the first statement (question).", "NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combinations closely match the questions. Thus, conclusions and supplements in the answers are consistent with each other and avoid confusion made by several different conclusion-supplement answers assigned to a single non-factoid questions. Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations." ], [ "Following evaluations made by crowdsourced evaluators BIBREF29, we conducted human evaluations to judge the outputs of CLSTM and those of NAGM. Different from BIBREF29, we hired human experts who had experience in Oshiete-goo QA community service. 
Thus, they were familiar with the sorts of answers provided by and to the QA community.", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean “grammar” to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the “content matched the questions” to refer to “focus” and “structure and coherence” in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently.", "Table TABREF22 and Table TABREF22 present the results. The numbers are percentages. Table 7 presents examples of questions and answers. For Oshiete-goo results, the original Japanese and translated English are presented. The questions are very long and include long background descriptions before the questions themselves.", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement." ], [ "The encoder-decoder network tends to re-generate answers in the training corpus. On the other hand, NAGM can generate answers not present in the corpus by virtue of its ensemble network that considers contexts and sentence combinations.", "Table 7 lists some examples. For example, answer #1 generated by NAGM is not in the training corpus. We think it was generated from the parts in italics in the following three sentences that are in the corpus: (1) “I think that it is better not to do anything from your side. If there is no reaction from him, it is better not to do anything even if there is opportunity to meet him next.” (2) “I think it may be good for you to approach your lover. Why don't you think positively about it without thinking too pessimistically?” (3) “Why don't you tell your lover that you usually do not say what you are thinking. $\\cdots $ I think that it is important to communicate the feelings to your lover; how you like or care about him/her especially when you are quarreling with each other.”", "The generation of new answers is important for non-factoid answer systems, since they must cope with slight differences in question contexts from those in the corpus." ], [ "Our ensemble network is currently being used in the love advice service of Oshiete goo BIBREF30. The service uses only the ensemble network to ensure that the service offers high-quality output free from grammar errors. We input the sequences in our evaluation corpus instead of the decoder output sequences into the ensemble network. 
Our ensemble network then learned the optimum combination of answer sequences as well as the closeness of the question and those sequences. As a result, it can construct an answer that corresponds to the situation underlying the question. In particular, 5,702 answers created by the AI, whose name is Oshi-el (Oshi-el means teaching angel), using our ensemble network in reply to 33,062 questions entered from September 6th, 2016 to November 17th, 2019, were judged by users of the service as good answers. Oshi-el output good answers at about twice the rate of the average human responder in Oshiete-goo who answered more than 100 questions in the love advice category. Thus, we think this is a good result.", "Furthermore, to evaluate the effectiveness of the supplemental information, we prepared 100 answers that only contained conclusion sentences during the same period of time. As a result, users rated the answers that contained both conclusion and supplement sentences as good 1.6 times more often than those that contained only conclusion sentences. This shows that our method successfully incorporated supplemental information in answering non-factoid questions." ], [ "We tackled the problem of conclusion-supplement answer generation for non-factoid questions, an important task in NLP. We presented an architecture, ensemble network, that uses an attention mechanism to reflect the context of the conclusion decoder's output sequence on the supplement decoder's output sequence. The ensemble network also assesses the closeness of the encoder input sequence to the output of each decoder and the combined output sequences of both decoders. Evaluations showed that our architecture was consistently superior to conventional encoder-decoders in this task. The ensemble network is now being used in the “Love Advice,” service as mentioned in the Evaluation section.", "Furthermore, our method, NAGM, can be generalized to generate much longer descriptions other than conclusion-supplement answers. For example, it is being used to generate Tanka, which is a genre of classical Japanese poetry that consists of five lines of words, in the following way. The first line is input by a human user to NAGM as a question, and NAGM generates second line (like a conclusion) and third line (like a supplement). The third line is again input to NAGM as a question, and NAGM generates the fourth line (like a conclusion) and fifth line (like a supplement)." ] ], "section_name": [ "Introduction", "Related work", "Model", "Model ::: Encoder", "Model ::: Decoder", "Model ::: Ensemble network", "Model ::: Loss function of ensemble network", "Model ::: Training", "Evaluation ::: Compared methods", "Evaluation ::: Dataset", "Evaluation ::: Dataset ::: Oshiete-goo", "Evaluation ::: Dataset ::: nfL6", "Evaluation ::: Methodology", "Evaluation ::: Parameter setup", "Evaluation ::: Results ::: Performance", "Evaluation ::: Results ::: Human evaluation", "Evaluation ::: Results ::: Generating answers missing from the corpus", "Evaluation ::: Results ::: Online evaluation in “Love Advice” service", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "4dad94cc08e748b6b136ea2a0f974adc393ccc66" ], "answer": [ { "evidence": [ "NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combinations closely match the questions. Thus, conclusions and supplements in the answers are consistent with each other and avoid confusion made by several different conclusion-supplement answers assigned to a single non-factoid questions. Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations.", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean “grammar” to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the “content matched the questions” to refer to “focus” and “structure and coherence” in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently.", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement.", "FLOAT SELECTED: Table 4: ROUGE-L/BLEU-4 for nfL6.", "FLOAT SELECTED: Table 6: Human evaluation (nfL6)." ], "extractive_spans": [], "free_form_answer": "For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%. ", "highlighted_evidence": [ "Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. ", "The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. 
The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. ", "These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM.", "FLOAT SELECTED: Table 4: ROUGE-L/BLEU-4 for nfL6.", "FLOAT SELECTED: Table 6: Human evaluation (nfL6)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "ea4394112c1549185e6b763d6f36733a9f2ed794" ] } ], "nlp_background": [ "two" ], "paper_read": [ "no" ], "question": [ "How much more accurate is the model than the baseline?" ], "question_id": [ "572458399a45fd392c3a4e07ce26dcff2ad5a07d" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "" ], "topic_background": [ "familiar" ] }
{ "caption": [ "Figure 1: Neural conclusion-supplement answer generation model.", "Table 1: Results when changing α.", "Table 2: Results when using sentence-type embeddings.", "Table 6: Human evaluation (nfL6).", "Table 4: ROUGE-L/BLEU-4 for nfL6.", "Table 7: Example answers generated by CLSTM and NAGM. #1 is for Oshiete-goo and #2 for nfL6." ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Table6-1.png", "6-Table4-1.png", "7-Table7-1.png" ] }
[ "How much more accurate is the model than the baseline?" ]
[ [ "1912.00864-Evaluation ::: Results ::: Human evaluation-3", "1912.00864-Evaluation ::: Results ::: Human evaluation-1", "1912.00864-Evaluation ::: Results ::: Performance-1", "1912.00864-6-Table4-1.png", "1912.00864-6-Table6-1.png" ] ]
[ "For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%. " ]
563
1910.11204
Syntax-Enhanced Self-Attention-Based Semantic Role Labeling
As a fundamental NLP task, semantic role labeling (SRL) aims to discover the semantic roles for each predicate within one sentence. This paper investigates how to incorporate syntactic knowledge into the SRL task effectively. We present different approaches of encoding the syntactic information derived from dependency trees of different quality and representations; we propose a syntax-enhanced self-attention model and compare it with other two strong baseline methods; and we conduct experiments with newly published deep contextualized word representations as well. The experiment results demonstrate that with proper incorporation of the high quality syntactic information, our model achieves a new state-of-the-art performance for the Chinese SRL task on the CoNLL-2009 dataset.
{ "paragraphs": [ [ "The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on.", "UTF8gbsn", "Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones.", "In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies.", "For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently self-attention-based encoder becomes popular due to both the effectiveness and the efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which is convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Enlightened by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information.", "Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. 
From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information.", "In summary, the contributions of our work are:", "We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate.", "We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model.", "We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings." ], [ "Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26.", "BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge.", "In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16.", "For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task." ], [ "In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. 
Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method." ], [ "Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5." ], [ "The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding.", "Token Embedding includes word embedding, part-of-speech (POS) tag embedding.", "Predicate Embedding has been proposed by BIBREF8, and its binary embedding is used to indicate the predicates indices in each sentence.", "Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 to use time positional embedding, which is formulated as follows:", "where $t$ is the position, $i$ means the dimension, and $d$ is the dimension of the model input embedding." ], [ "The self-attention block is almost the same as Transformer encoder proposed by BIBREF13. Specifically the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network. The former is followed by the latter. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module as Figure FIGREF5 shows.", "FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation:", "Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows:", "where $Q$ is queries, $K$ is keys, and $V$ is values.", "In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example:", "where $0 \\le i < h$. Keys and values use similar projections.", "On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations.", "where", "More details about multi-head attention can be found in BIBREF13.", "Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as", "where $f(x)$ is implemented by each above module." ], [ "The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short.", "Except for LISA, where Dep is a one-hot matrix of dependency head word index described in SECREF25, in other cases, we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as common strings and map them into dense vectors in the similar way of word embedding." ], [ "In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. 
Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath).", "To generate DepPath & RelPath between a candidate argument and a predicate, we first find their lowest common ancestor. Then we get two sub-paths: one from the ancestor to the predicate and the other from the ancestor to the argument. For DepPath, we compute the distances from the ancestor to the predicate and to the argument respectively, and then concatenate the two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_\" respectively to get two label paths, and then concatenate the two label paths with the separator `,'.", "As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)\" and the candidate argument “农业 (agriculture)\" is “鼓励 (encourage)\", so their DepPath is “2,0\" and their RelPath is “COMP_COMP,\".", "We take both DepPath and RelPath as common strings and map them into dense vectors in a similar way to Dep and Rel." ], [ "To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like the word embedding:", "where $\\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath." ], [ "BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results, and it can also directly use pre-trained dependency head results to replace the attention matrix during testing.", "Different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length, as Figure FIGREF27 shows.", "We add the dependency relation information to $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows:", "where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath." ], [ "The relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. In this way, the model considers the pairwise relationships between input elements, which agrees well with the task of SRL, i.e., finding the semantic relations between the candidate argument and the predicate in one sentence.", "Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, as equation (DISPLAY_FORM15) shows:", "where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers." ], [ "Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. 
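Before turning to the experimental details, we note that the DepPath/RelPath construction described earlier in this section can be sketched as follows. This is an illustrative reading of the procedure only, not the authors' released code; the head-array tree representation, the function names, and the argument-side-first ordering of the two sub-paths (matching the “2,0"/“COMP_COMP," example) are our assumptions.

```python
# Illustrative sketch: the dependency tree is given as head indices (root = -1)
# and per-token relation labels; path ordering follows the worked example above.

def path_to_root(idx, heads):
    """Nodes from `idx` up to the root, inclusive."""
    path = [idx]
    while heads[idx] != -1:
        idx = heads[idx]
        path.append(idx)
    return path

def dep_and_rel_path(arg, prd, heads, rels):
    arg_up = path_to_root(arg, heads)
    prd_up = path_to_root(prd, heads)
    prd_set = set(prd_up)
    # Lowest common ancestor: the first node on the argument's path to the root
    # that also lies on the predicate's path to the root.
    lca = next(n for n in arg_up if n in prd_set)
    arg_sub = arg_up[:arg_up.index(lca)]          # nodes below the LCA on the argument side
    prd_sub = prd_up[:prd_up.index(lca)]          # nodes below the LCA on the predicate side
    dep_path = f"{len(arg_sub)},{len(prd_sub)}"   # e.g. "2,0"
    rel_path = "_".join(rels[n] for n in arg_sub) + "," + \
               "_".join(rels[n] for n in prd_sub) # e.g. "COMP_COMP,"
    return dep_path, rel_path

# Example mirroring Figure FIGREF21: for the argument 农业 and the predicate 鼓励
# (which is itself the LCA, two arcs above the argument), the function returns
# ("2,0", "COMP_COMP,").
```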
We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies.", "Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\\frac{1}{\\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer.", "For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations.", "Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting.", "Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30.", "Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$.", "To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers.", "We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training.", "We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\\epsilon =10^{-6}$ and $\\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words.", "All the hyper-parameters are tuned on the development set.", "Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36." ], [ "We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. 
We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upper-bound reference of our task setting. Experiment results in Table TABREF37 demonstrate that incorporating syntactic knowledge into the SRL model achieves better performance and, overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset.", "Closer observation reveals two additional interesting phenomena. Firstly, the SRL performance improvement is not proportionate to the improvement in dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves by 0.5%, although the syntactic dependency quality improves by about 8%. In contrast, the difference between Biaffine and BiaffineBert shows a more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in the other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large even though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert)." ], [ "Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encodings in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter out paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10.", "Regardless of the representation, the dependency label information is more important than the head, and the combination of the two achieves better performance, as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees.", "Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better results than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation." ], [ "Note that, from the mechanism of LISA, the replaced attention head cannot copy the syntactic dependency heads from DepPath.", "This subsection discusses the effectiveness of the different methods of incorporating the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison.", "Firstly, the results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while when richer syntactic knowledge is incorporated (Dep&Rel or Dep&RelPath), the three methods achieve similar performance. Overall, RelAwe achieves the best results given enough syntactic knowledge.", "Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. 
Our proposed RelAwe method with the DepPath&RelPath representation, which encodes the richest syntactic knowledge, performs the best.", "Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of the location where the model incorporates the syntax: the input layer vs. the encoder layer." ], [ "Apart from the experiments with the syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath, and the results are listed in Table TABREF45.", "The plain word embeddings improve the results only a little in such settings with syntactic information, while the recently proposed ELMo and BERT can both boost the models further." ], [ "Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with BERT as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 and use existing systems' predicate senses BIBREF43 to exclude predicate sense disambiguation from the comparison.", "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task." ], [ "We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on languages other than Chinese, and the results are in Table TABREF49. Although neither configuration is exactly the same as in the original papers, we tried our best to reproduce their methods on the CoNLL-2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvement is not as large as for the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. We reimplement LISA from BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath). Therefore, we can compare with their work as fairly as possible. Other settings are the best configurations for their corresponding methods." ], [ "This paper investigates in depth how to incorporate syntactic dependency information into semantic role labeling. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees, and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. 
Thirdly, we compare three incorporation methods and discover that our proposed relation-aware self-attention-based model is the most effective one.", "Although our experiments are primarily on the Chinese dataset, the approach is largely language independent. Apart from our tentative experiments on the English dataset, applying the approach to other languages will be an interesting research direction to work on in the future." ] ], "section_name": [ "Introduction", "Related work", "Approaches", "Approaches ::: The Basic Architecture", "Approaches ::: The Basic Architecture ::: Input Layer", "Approaches ::: The Basic Architecture ::: Encoder Layer", "Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation", "Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path", "Approaches ::: Incorporation Methods ::: Input Embedding Concatenation", "Approaches ::: Incorporation Methods ::: LISA", "Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention", "Experiment ::: Settings", "Experiment ::: Quality of the Syntactic Dependencies", "Experiment ::: Representation of the Syntactic Dependencies", "Experiment ::: Incorporation Methods", "Experiment ::: External Resources", "Experiment ::: Final Results on the Chinese Test Data", "Experiment ::: Results on the English Data", "Conclusion and Future Work" ] }
{ "answers": [ { "annotation_id": [ "e939615a4ca7e4e5b67ab7c21b0cb07526f873eb" ], "answer": [ { "evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task." ], "extractive_spans": [ "our Open model achieves more than 3 points of f1-score than the state-of-the-art result" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "916fc40551dc10dd6cdf2d1b27034e8f3dd58024" ], "answer": [ { "evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task.", "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "extractive_spans": [], "free_form_answer": "In closed setting 84.22 F1 and in open 87.35 F1.", "highlighted_evidence": [ "Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings.", "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "dde033031c260f12226e1a6cefacd3ea19ca0725" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "4db8a2adebdd2c8cd8e75aaae248d08c289700f4" ], "answer": [ { "evidence": [ "The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short.", "In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. 
Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath)." ], "extractive_spans": [ "dependency head and dependency relation label, denoted as Dep and Rel for short", "Tree-based Position Feature (TPF) as Dependency Path (DepPath)", "Shortest Dependency Path (SDP) as Relation Path (RelPath)" ], "free_form_answer": "", "highlighted_evidence": [ "The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short.", "In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "7766a11104ae5b54ae1752dd3180103bad0063f7" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "extractive_spans": [], "free_form_answer": "Marcheggiani and Titov (2017) and Cai et al. (2018)", "highlighted_evidence": [ "FLOAT SELECTED: Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "How big is improvement over the old state-of-the-art performance on CoNLL-2009 dataset?", "What is new state-of-the-art performance on CoNLL-2009 dataset?", "How big is CoNLL-2009 dataset?", "What different approaches of encoding syntactic information authors present?", "What are two strong baseline methods authors refer to?" ], "question_id": [ "cb4727cd5643dabc3f5c95e851d5313f5d979bdc", "33d864153822bd378a98a732ace720e2c06a6bc6", "b13cf4205f3952c3066b9fb81bd5c4277e2bc7f5", "86f24ecc89e743bb1534ac160d08859493afafe9", "bab8c69e183bae6e30fc362009db9b46e720225e" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: An example of one sentence with its syntactic dependency tree and semantic roles. Arcs above the sentence are semantic role annotations for the predicate “鼓励 (encourage)” and below the sentence are syntactic dependency annotations of the whole sentence. The meaning of this sentence is “China encourages foreign merchants to invest in agriculture”.", "Figure 2: Architecture of our syntax-enhanced selfattention-based SRL model. Red dotted arrows indicate different locations where we incorporate linguistic knowledge in different forms. The dotted box on the upper right is the detailed composition of the selfattention block.", "Figure 3: The syntactic dependency tree of the sentence “中国鼓励外商投资农业” (China encourages foreign merchants to invest in agriculture). Numbers in brackets are the DEPPATH for each candidate argument with the predicate “鼓励 (encourage)”. Light grey labels on the arcs are the syntactic dependency labels.", "Table 1: Syntactic dependency performance for different parsers. AUTO indicates the automatic dependency trees provided by the CoNLL-09 Chinese dataset. BIAFFINE means the trees are generated by BiaffineParser with pre-trained word embedding on the Gigaword corpus while BIAFFINEBERT is the same parser with BERT. We use the labeled accuracy score (LAS) and unlabeled accuracy score (UAS) to measure the quality of syntactic dependency trees.", "Figure 4: Attention matrix of the replaced attention head in the LISA model. The left matrix is the original softmax attention, and the right is a one-hot matrix copied from the syntactic dependency head results.", "Table 2: A glossary of abbreviations for different system configurations in our experiments.", "Table 3: SRL results with dependency trees of different quality on the Chinese dev set. These experiments are conducted on the RELAWE model with DEP&REL representations.", "Table 4: SRL results with different syntactic representations on the Chinese dev set. Experiments are conducted on the RELAWE method.", "Table 5: SRL results with different incorporation methods of the syntactic information on the Chinese dev set. Experiments are conducted on the BIAFFINE parsing results.", "Table 6: SRL results with different external knowledge on the Chinese dev set. We use the RELAWE model and DEPPATH&RELPATH syntax representation.", "Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model.", "Table 8: SRL results on the English test set. We use syntactic dependency results generated by BiaffineParser (On test set, syntactic performance is: UAS = 94.35%, and LAS = 92.54%, which improves about 6% compared to automatic trees in CoNLL2009.)." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Table1-1.png", "5-Figure4-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "7-Table6-1.png", "8-Table7-1.png", "9-Table8-1.png" ] }
[ "What is new state-of-the-art performance on CoNLL-2009 dataset?", "What are two strong baseline methods authors refer to?" ]
[ [ "1910.11204-Experiment ::: Final Results on the Chinese Test Data-1", "1910.11204-8-Table7-1.png" ], [ "1910.11204-8-Table7-1.png" ] ]
[ "In closed setting 84.22 F1 and in open 87.35 F1.", "Marcheggiani and Titov (2017) and Cai et al. (2018)" ]
564
2003.07758
Multi-modal Dense Video Captioning
Dense video captioning is a task of localizing interesting events from an untrimmed video and producing textual description (captions) for each localized event. Most of the previous works in dense video captioning are solely based on visual information and completely ignore the audio track. However, audio, and speech, in particular, are vital cues for a human observer in understanding an environment. In this paper, we present a new dense video captioning approach that is able to utilize any number of modalities for event description. Specifically, we show how audio and speech modalities may improve a dense video captioning model. We apply automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside video frames and the corresponding audio track. We formulate the captioning task as a machine translation problem and utilize recently proposed Transformer architecture to convert multi-modal input data into textual descriptions. We demonstrate the performance of our model on ActivityNet Captions dataset. The ablation studies indicate a considerable contribution from audio and speech components suggesting that these modalities contain substantial complementary information to video frames. Furthermore, we provide an in-depth analysis of the ActivityNet Caption results by leveraging the category tags obtained from original YouTube videos. The program code of our method and evaluations will be made publicly available.
{ "paragraphs": [ [ "The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence.", "Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, the captioning methods can be leveraged by recent developments in machine translation field, such as Transformer model BIBREF3. The main idea in the transformer is to utilise self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model.", "The vast majority of previous works are generating captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video, might make a crucial difference to the content description. For instance, in a scene when someone knocks the door from an opposite side, we only see the door but the audio helps us to understand that somebody is behind it and wants to enter. Therefore, it is impossible for a model to make a useful caption for it. Also, other types of videos as instruction videos, sport videos, or video lectures could be challenging for a captioning model.", "In contrast, we build our model to utilize video frames, raw audio signal, and the speech content in the caption generation process. To this end, we deploy automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ it alongside with video and audio representations in the transformer model.", "The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain competitive results to the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from audio and speech signals. Moreover, we retrieve and perform breakdown analysis by utilizing previously unused video category tags provided with the original YouTube videos BIBREF12. The program code of our model and the evaluation approach will be made publicly available." ], [ "Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. 
Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features and accumulates its hidden state, which is passed to a decoder for producing a caption.", "To further improve the performance of the captioning model, several methods have been proposed, including shared memory between the visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and producing a paragraph instead of one sentence BIBREF36, BIBREF1." ], [ "Inspired by the idea of the dense image captioning task BIBREF37, Krishna et al. BIBREF2 introduced the problem of dense video captioning and released a new dataset called ActivityNet Captions, which spurred research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts, as well as an attentive fusion to differentiate captions from highly overlapping events. Meanwhile, the concept of the Single Shot Detector (SSD) BIBREF39 was also used to generate event proposals, along with reward maximization for better captioning, in BIBREF6.", "In order to mitigate the intrinsic difficulty of RNNs in modeling long-term dependencies in a sequence, Zhou et al. BIBREF4 tailored the recent idea of the Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that the captioning may benefit from interactions between objects in a video and developed a recurrent higher-order interaction module to model these interactions. Xiong et al. BIBREF8 noticed that many previous models produced redundant captions, and proposed to generate captions in a progressive manner, conditioned on the previous caption, while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for a more coherent story-telling have been employed in BIBREF9.", "Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically collected a dataset of unparalleled scale by exploiting the structure of instructional videos.", "The most similar work to our captioning model is BIBREF4, which also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities." ], [ "A few attempts have been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for the dense video captioning task. Rahman et al. BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. 
Hessel et al. BIBREF42 and Shi et al. BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the strong results on a dataset restricted to instructional videos are not conclusive, as the speech and the captions are already very close to each other in such videos BIBREF41.", "In contrast to the mentioned multi-modal dense video captioning methods: (1) we demonstrate the importance of the speech and audio modalities on a domain-free dataset, and (2) we propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities." ], [ "In this section, we briefly outline the workflow of our method, referred to as Multi-modal Dense Video Captioning (MDVC), which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events in a video and to produce a textual description for each of them. To this end, we apply a two-stage approach.", "Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies a 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to a subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location to be the start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation.", "Secondly, we generate the captions given a proposal. To produce inputs from the audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for the visual and the VGGish network BIBREF47 for the audio modality. To represent the speech as text, we employ an external ASR system BIBREF11. To represent the text in a numerical form, we use a text embedding similar to the one used for caption encoding. The features are, then, fed to individual transformer models along with the words of a caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions." ], [ "An event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ the pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5, as it has been shown to reach good performance in the proposal generation task.", "Bi-SST inputs a sequence of $F$ RGB frames from a video $V = (x_1, x_2, \\dots , x_F)$ and extracts a set of 4096-d features $V^{\\prime } = (f_1, f_2, \\dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components were selected using PCA.", "To account for the video context, events are proposed during forward and backward passes on a video sequence $V^{\\prime }$, and, then, the resulting scores are fused together to obtain the final proposal set. 
Specifically, during the forward pass, an LSTM is used to accumulate the visual cues from the “past” context at each position $t$, which is treated as an ending point, and to produce confidence scores for each proposal.", "Afterwards, a similar procedure is performed during the backward pass, where the features $V^{\\prime }$ are used in reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. Finally, the confidence scores from both passes are fused by multiplication of the corresponding scores for each proposal at each time step, and, then, filtered according to a predefined threshold.", "Finally, we obtain a set of $N_V$ event proposals for caption generation $P_V=\\lbrace p_j = (\\text{start}_j, \\text{end}_j, \\text{score}_j)\\rbrace _{j=1}^{N_V}$." ], [ "In this section, we explain the captioning module for an example modality, namely, the visual one. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from a video $V$, we employ the I3D network BIBREF46 pre-trained on the Kinetics dataset, which produces 1024-d features. The gap between the extracted features and the generated captions is filled with the Transformer BIBREF3 architecture, which has been proven to effectively encode and decode information in a sequence-to-sequence setting." ], [ "As shown in Fig. FIGREF6, the Feature Transformer architecture mainly consists of three blocks: an encoder, a decoder, and a generator. The encoder inputs a set of extracted features $ \\mathbf {v}^j = (v_1, v_2, \\dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \\mathbf {z}^j = (z_1, z_2, \\dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\\mathbf {z}^j$ and the embedding $ \\mathbf {e}^j_{\\leqslant t} = (e_1, e_2, \\dots , e_t)$ of the words in a caption $ \\mathbf {w}^j_{\\leqslant t} = (w_1, w_2, \\dots , w_t) $. It produces the representation $ \\mathbf {g}^j_{\\leqslant t} = (g_1, g_2, \\dots , g_t) $ which, in turn, is used by the generator to model a distribution over a vocabulary for the next word $ p(w_{t+1}|\\mathbf {g}^j_{\\leqslant t}) $. The next word is selected greedily by obtaining the word with the highest probability until a special ending token is sampled. The captioning is initialized with a starting token; both special tokens are added to the vocabulary.", "Before providing an overview of the encoder, decoder, and generator, we present the notion of multi-headed attention that acts as an essential part of the decoder and encoder blocks. The concept of the multi-head attention, in turn, heavily relies on dot-product attention, which we describe next." ], [ "The idea of the multi-headed attention rests on the scaled dot-product attention, which calculates a weighted sum of the values. The weights are obtained by applying the softmax function to the dot-product of each pair of rows of the queries and keys scaled by $\\frac{1}{\\sqrt{D_k}}$. The scaling is done to prevent the softmax function from being in the small gradient regions BIBREF3. Formally, the scaled dot-product attention can be represented as $\\text{Attention}(Q, K, V) = \\text{softmax}\\big (\\frac{QK^{\\top }}{\\sqrt{D_k}}\\big )V$,", "where $Q, K, V $ are queries, keys, and values, respectively." ], [ "The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. 
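As a reference, the scaled dot-product attention described above can be sketched in a few lines. This is a generic PyTorch-style illustration under our own naming, not the authors' released implementation.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(D_k)) V.

    Q: (..., T_q, D_k), K: (..., T_k, D_k), V: (..., T_k, D_v).
    """
    d_k = Q.size(-1)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)  # (..., T_q, T_k)
    weights = F.softmax(scores, dim=-1)   # each row sums to 1 over the keys
    return torch.matmul(weights, V)       # weighted sum of the values
```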
The block consists of $H$ heads that allow the model to cooperatively account for information from several representation sub-spaces at every position while preserving the same computational complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined in the following way: $\\text{head}_h = \\text{Attention}(qW^{q}_h, kW^{k}_h, vW^{v}_h)$,", "where $q, k, v$ are matrices which have $D_T$ columns and a number of rows depending on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \\in \\mathbb {R}^{D_T \\times D_k}$ are trainable projection matrices that map $q, k , v$ from $D_T$ into $D_k= \\frac{D_T}{H}$, assuming $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by the trainable parameter matrix $W^o \\in \\mathbb {R}^{D_k \\cdot H \\times D_T}$: $\\text{MultiHead}(q, k, v) = \\text{Concat}(\\text{head}_1, \\dots , \\text{head}_H)W^o$." ], [ "The encoder consists of $ L $ layers. The first layer inputs a set of features $ \\mathbf {v}^j $ and outputs an internal representation $ \\mathbf {z}_1^j \\in \\mathbb {R}^{T_j \\times D_T} $, while each of the next layers treats the output of the previous layer as its input. Each encoder layer $l$ consists of two sub-layers: multi-headed attention and a position-wise fully connected network, which are explained later in this section. The inputs to both sub-layers are normalized using layer normalization BIBREF48, and each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition", "where $\\text{FCN}$ is the position-wise fully connected network. Note that the multi-headed attention has identical queries, keys, and values ($ \\overline{\\mathbf {z}}_l^j $). Such a multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for the information from all states of the previous layer $ \\mathbf {z}_{l-1}^j$. This property contrasts with the idea of an RNN, which accumulates only the information from past positions." ], [ "Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\\mathbf {e}^j_{\\leqslant t}$ together with the output of the encoder $\\mathbf {z}^j$ and sends the output to the next layer, which is conditioned on this output and, again, the encoder output $\\mathbf {z}^j$. Eventually, the decoder produces its internal representation $\\mathbf {g}_{\\leqslant t}^j \\in \\mathbb {R}^{t \\times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention on the encoder output and the output of its previous sub-layer. The decoder employs layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form:", "where $ \\mathbf {z}^j $ is the encoder output. Note that, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function, while the second multi-headed attention block attends to both the encoder and the decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend to all states of the encoder's output $ \\mathbf {z}^j$." ], [ "The fully connected network is used in each layer of the encoder and the decoder. 
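Building on the previous sketch, the multi-headed block described above splits the model dimension $D_T$ into $H$ heads of size $D_k = D_T / H$, attends to them in parallel, and projects the concatenation back with $W^o$. The class below is again an illustrative approximation under our assumptions (biased linear layers, no masking or dropout), reusing the scaled_dot_product_attention function sketched earlier; it is not the authors' code.

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0, "D_T must be a multiple of H"
        self.d_k, self.h = d_model // n_heads, n_heads
        self.w_q = nn.Linear(d_model, d_model)  # stacks all W^q_h
        self.w_k = nn.Linear(d_model, d_model)  # stacks all W^k_h
        self.w_v = nn.Linear(d_model, d_model)  # stacks all W^v_h
        self.w_o = nn.Linear(d_model, d_model)  # W^o

    def forward(self, q, k, v):
        B = q.size(0)
        def split(x, proj):                      # (B, T, D_T) -> (B, H, T, D_k)
            return proj(x).view(B, -1, self.h, self.d_k).transpose(1, 2)
        heads = scaled_dot_product_attention(    # defined in the sketch above
            split(q, self.w_q), split(k, self.w_k), split(v, self.w_v))
        concat = heads.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_k)
        return self.w_o(concat)                  # map the concatenation back to D_T
```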
The FCN is a simple two-layer neural network that inputs $x$, the output of the multi-head attention block, and, then, projects each row (or position) of the input $x$ from the $D_T$ space onto $D_P$, $(D_P > D_T)$ and back; formally: $\\text{FCN}(x) = \\text{ReLU}(xW_1 + b_1)W_2 + b_2$,", "where $W_1 \\in \\mathbb {R}^{D_T \\times D_P}$, $W_2 \\in \\mathbb {R}^{D_P \\times D_T}$, and biases $b_1, b_2$ are trainable parameters, and $\\text{ReLU}$ is a rectified linear unit." ], [ "At the position $t$, the generator consumes the output of the decoder $\\mathbf {g}^j_{\\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \\mathbf {g}^j_{\\leqslant t})$. To obtain the distribution, the generator applies the softmax function to the output of a fully connected layer with a weight matrix $W_G \\in \\mathbb {R}^{D_T \\times D_V}$, where $D_V$ is the vocabulary size. The word with the highest probability is selected as the next one." ], [ "Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a dimension of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\\sqrt{D_T}$. The position encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for the transformer architecture, i.e. we add the output of a combination of sine and cosine functions to the embedded input sequence BIBREF3." ], [ "In this section, we present the multi-modal dense video captioning module which utilises the visual, audio, and speech modalities. See Fig. FIGREF6 for a schematic representation of the module.", "For the speech representation $\\mathbf {s}^j = (s_1, s_2, \\dots , s_{T_j^s})$, we use a 512-d text embedding similar to the one employed in the embedding of a caption $\\mathbf {w}^j_{\\leqslant t}$. To account for the audio information, given a proposal $p_j$ we extract a set of features $\\mathbf {a}_j = (a_1, a_2, \\dots , a_{T_j^a})$ by applying the 128-d embedding layer of the pre-trained VGGish network BIBREF47 to the audio track. The visual features $\\mathbf {v}^j = (v_1, v_2, \\dots v_{T_j^v}) $ are encoded with 1024-d vectors by the Inflated 3D (I3D) convolutional network BIBREF46.", "To fuse the features, we create an encoder and a decoder for each modality with dimensions corresponding to the size of the extracted features. The outputs from all decoders are fused inside the generator, and the distribution over the next word $w_{t+1}$ is formed.", "In our experimentation, we found that a simple two-layer fully-connected network applied to a matrix of concatenated features performs best, with a ReLU activation after the first layer and a softmax after the second one. Each layer of the network has a matrix of trainable weights: $W_{F_1} \\in \\mathbb {R}^{D_F \\times D_V}$ and $W_{F_2} \\in \\mathbb {R}^{D_V \\times D_V}$, where $D_F = 512 + 128 + 1024 $ and $D_V$ is the vocabulary size." ], [ "As the training is conducted using mini-batches of size 28, the features in one modality must be of the same length so that the features can be stacked into a tensor. 
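A minimal sketch of the fusion generator described above is given below: the decoder outputs of the speech (512-d), audio (128-d), and visual (1024-d) streams are concatenated into a $D_F = 1644$-dimensional vector per position and passed through two fully connected layers with a ReLU in between and a softmax over the caption vocabulary at the end. Layer sizes follow the text and the caption vocabulary size of 10,172 is taken from the implementation details; everything else (names, biases, tensor layout) is our assumption, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalGenerator(nn.Module):
    def __init__(self, d_speech=512, d_audio=128, d_visual=1024, vocab_size=10172):
        super().__init__()
        d_fused = d_speech + d_audio + d_visual       # D_F = 1644
        self.fc1 = nn.Linear(d_fused, vocab_size)     # W_F1: D_F x D_V
        self.fc2 = nn.Linear(vocab_size, vocab_size)  # W_F2: D_V x D_V

    def forward(self, g_speech, g_audio, g_visual):
        # each input: (batch, t, D_modality) decoder states for the caption prefix
        x = torch.cat([g_speech, g_audio, g_visual], dim=-1)
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)  # p(w_{t+1} | .)
```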
To make the lengths equal, we pad the features and the embedded captions to match the size of the longest sample.", "The model is trained by optimizing the Kullback–Leibler divergence loss, which measures the “distance” between the ground truth and predicted distributions and averages the values for all words in a batch, ignoring the masked tokens.", "Since many words in the English language may have several synonyms or the human annotation may contain mistakes, we encourage the model to be less certain about its predictions and apply Label Smoothing BIBREF50 with the smoothing parameter $\\gamma $ to the ground truth labels to mitigate this. In particular, in the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as a one-hot encoded vector, the value of one at the target position is replaced with $1-\\gamma $ while the rest of the values are filled with $\\frac{\\gamma }{D_V-1}$.", "During training, we exploit the teacher forcing technique, which uses the ground truth sequence up to position $t$ as the input to predict the next word, instead of using the sequence of predictions. As we input the whole ground truth sequence at once and predict the next word at each position, we need to prevent the transformer from peeking at the information from the next positions, as it attends to all positions of the input. To mitigate this, we apply masking inside the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3.", "The details on the feature extraction and other implementation details are available in the supplementary materials." ], [ "We perform our experiments using the ActivityNet Captions dataset BIBREF2, which is considered the standard benchmark for the dense video captioning task. The dataset contains approximately 20k videos from YouTube and is split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions of around 13.65 words each and is two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set).", "The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. The authors provide pre-computed C3D features and frames at 5fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos, which is, roughly, 91 % of the dataset. Out of these, 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system, which can be thought of as subtitles.", "We evaluate the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard METEOR as our primary metric as it has been shown to be highly correlated with human judgement in a situation with a limited number of references (only one, in our case).", "We employ the official evaluation script provided in BIBREF53. Thus, the metrics are calculated only if a proposed event and a ground truth caption location overlap by more than a specified temporal Intersection over Union (tIoU) threshold, and are zero otherwise. All metric values are averaged for every video and, then, for every tIoU threshold in $[0.3, 0.5, 0.7, 0.9]$. On validation, we average the resulting scores over both validation sets. 
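To make the tIoU gating in this protocol concrete, a simplified sketch is given below. The official evaluation script additionally handles multiple references and proposal-to-caption matching, which this illustration omits, and the function names are ours.

```python
def temporal_iou(pred, gt):
    """tIoU between two (start, end) segments given in seconds."""
    (p_start, p_end), (g_start, g_end) = pred, gt
    intersection = max(0.0, min(p_end, g_end) - max(p_start, g_start))
    union = (p_end - p_start) + (g_end - g_start) - intersection
    return intersection / union if union > 0 else 0.0

def gated_score(metric_value, pred_segment, gt_segment, threshold):
    """A caption-level score (e.g. METEOR) only counts when the proposal
    overlaps the ground truth segment by more than the tIoU threshold."""
    return metric_value if temporal_iou(pred_segment, gt_segment) > threshold else 0.0

# Per-video scores are averaged first; the result is then averaged over the
# thresholds [0.3, 0.5, 0.7, 0.9] used by the official evaluation script.
```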
For the learned proposal setting, we report our results using at most 100 proposals per video.", "Notably, up to early 2017, the evaluation code had an issue which overestimated the performance of the algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code." ], [ "We compare our method with five related approaches, namely Krishna et al. BIBREF2, Wang et al. BIBREF5, Zhou et al. BIBREF4, Li et al. BIBREF6, and Rahman et al. BIBREF38. We take the performance values from the original papers, except for BIBREF6 and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27).", "The lack of access to the full ActivityNet Captions dataset makes a strictly fair comparison difficult, as we have fewer training and validation videos. Nevertheless, we present our results in two set-ups: 1) the full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset, whereas the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work.", "The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to a smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4, which is also based on the transformer architecture and accounts for the optical flow. This shows the superior performance of our captioning module, even though it was trained on a smaller amount of data." ], [ "In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals.", "Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by a Bidirectional Recurrent Neural Network with Gated Recurrent Units and attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison. To conclude, we observe that the feature transformer-based model is not only lighter but also achieves better performance in the dense video captioning task. Moreover, both methods clearly surpass the random baseline.", "Secondly, we evaluate the contribution of different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches, specifically, averaging of the output probabilities, and concatenation of the outputs of all modalities followed by two fully connected (FC) layers on top. We observe that the audio-only model has the worst performance, followed by the visual-only model, and the combination of these two. Moreover, the concatenation and FC layers result in better performance than averaging. 
To further assess whether the performance gain is due to the additional modalities or to the extra capacity in the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in the dense video captioning task.", "Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou et al. BIBREF4). We noticed the following pattern: the audio-only model produces coherent sentences and captures the concept of speaking in the video. However, there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both the visual-only MDVC and Zhou et al. BIBREF4 struggle to describe the audio details.", "Finally, to test whether our model improves the performance in general rather than in a specific video category, we report the comparison of the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTube API BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category except for the categories “Film & Animation” and “Travel & Events”, which might be explained by the lack of correspondence between the visual and audio tracks. Specifically, the video might be accompanied by music, e.g. a promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy." ], [ "The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experimentation, conducted employing the ActivityNet Captions dataset, shows the superior performance of our captioning module compared to the visual-only models in the existing literature. An extensive ablation study verifies this conclusion. We believe that our results firmly indicate that future works in video captioning should utilize a multi-modal input." ], [ "The supplementary material consists of four sections. In Section SECREF35, we provide qualitative results of the MDVC on another example video. The details on feature extraction and implementation are described in Sections SECREF36 and SECREF38. Finally, the comparison with other methods is shown in Section SECREF39." ], [ "In Figure FIGREF34, we provide a qualitative analysis of captioning on another video from the ActivityNet Captions validation set to emphasize the importance of additional modalities for dense video captioning, namely, speech and audio. 
We compare the captioning proposed by MDVC (our model) conditioned on different sets of modalities: audio-only (A-only), visual-only (V-only), and including all modalities (S + A + V). Additionally, we provide the results of a captioning model proposed in Zhou BIBREF4 (visual only) which showed the most promising results according to METEOR.", "More precisely, the video (YouTube video id: EGrXaq213Oc) lasts two minutes and contains 12 human annotations. The video is an advertisement for snowboarding lessons for children. It shows examples of children successfully riding a snowboard on a hill and supportive adults that help them to learn. A lady narrates the video and appears in the shot a couple of times.", "Generally, we may observe that MDVC with the audio modality alone (A-only) mostly describes that a woman is speaking which is correct according to the audio content yet the details about snowboarding and children are missing. This is expectedly challenging for the network as no related sound effects to snowboarding are present. In the meantime, the visual-only MDVC grasps the content well, however, misses important details like the gender of the speaker. While the multi-modal model MDVC borrows the advantages of both which results in more accurate captions. The benefits of several modalities stand out in captions for $p_2$ and $p_{10}$ segments. Note that despite the appearance of the lady in the shot during $p_{10}$, the ground truth caption misses it yet our model manages to grasp it.", "Yet, some limitations of the final model could be noticed as well. In particular, the content of some proposals is dissimilar to the generated captions, e. g. the color of the jacket ($p_4$, $p_5$), or when a lady is holding a snowboard with a child on it while the model predicts that she is holding a ski ($p_7$). Also, the impressive tricks on a snowboard were guessed simply as “ridding down a hill” which is not completely erroneous but still inaccurate ($p_8$). Overall, the model makes reasonable mistakes except for proposals $p_3$ and $p_4$. Finally, the generated captions provide more general description of a scene compared to the ground truth that is detailed and specific which could be a subject for future investigation." ], [ "Before training, we pre-calculate the features for both audio and visual modalities. In particular, the audio features were extracted using VGGish BIBREF47 which was trained on AudioSet BIBREF55. The input to the VGGish model is a $96\\times 64$ log mel-scaled spectrogram extracted for non-overlapping $0.96$ seconds segments. The log mel-scaled spectrogram is obtained by applying Short-Time Fourier Transform on a 16 kHz mono audio track using a periodic Hann window with 25 ms length with 10 ms overlap. The output is a 128-d feature vector after an activation function and extracted before a classification layer. Therefore, the input to MDVC is a matrix with dimension $T_j^a \\times 128$ where $T_j^a$ is the number of features proposal $p_j$ consists of.", "The visual features were extracted using I3D BIBREF46 network which inputs a set of 24 RGB and optical flow frames extracted at 25 fps. The optical flow is extracted with PWC-Net BIBREF58. First, each frame is resized such that the shortest side is 256 pixels. Then, the center region is cropped to obtain $224\\times 224$ frames. Both RGB and flow stacks are passed through the corresponding branch of I3D. The output of each branch are summed together producing 1024-d features for each stack of 24 frames. 
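As a concrete illustration of the visual preprocessing just described (shortest side resized to 256 pixels, a 224×224 center crop, and grouping into stacks of 24 frames), here is a minimal sketch using torchvision; the frame-loading convention and helper names are assumptions, not the authors' pipeline.

```python
import torch
from torchvision import transforms

# Resize the shorter side to 256 px, then take a 224x224 center crop,
# matching the preprocessing described for the I3D inputs.
frame_tf = transforms.Compose([
    transforms.Resize(256),    # an int argument resizes the shorter edge to 256
    transforms.CenterCrop(224),
    transforms.ToTensor(),     # (C, 224, 224), values in [0, 1]
])

def make_clips(frames, clip_len: int = 24):
    """Group preprocessed PIL frames into (num_clips, C, clip_len, 224, 224) stacks."""
    tensors = [frame_tf(f) for f in frames]            # list of (C, 224, 224)
    usable = (len(tensors) // clip_len) * clip_len
    stacked = torch.stack(tensors[:usable])             # (usable, C, 224, 224)
    clips = stacked.view(-1, clip_len, *stacked.shape[1:])  # (N, T, C, H, W)
    return clips.permute(0, 2, 1, 3, 4)                 # I3D expects (N, C, T, H, W)
```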
Hence, the resulting matrix has the shape: $T_j^v\\times 1024$, where $T_j^v$ is the number of features required for a proposal $p_j$.", "We use 24 frames for I3D input to temporally match with the input of the audio modality as $\\frac{24}{25} = 0.96$. Also note that I3D was pre-trained on the Kinetics dataset with inputs of 64 frames, while we use 24 frames. This is a valid approach since we employ the output of the second to the last layer after activation and average it on the temporal axis.", "The input for speech modality is represented by temporally allocated text segments in the English language (one could think of them as subtitles). For a proposal $ p_j $, we pick all segments that both: a) end after the proposal starting point, and b) start before the proposal ending point. This provides us with sufficient coverage of what has been said during the proposal segment. Similarly to captions, each word in a speech segment is represented as a number which corresponds to the word's order number in the vocabulary and then passed through the text embedding of size 512. We omit the subtitles that describe the sound like “[Applause]” and “[Music]” as we are only interested in the effect of the speech. Therefore, the speech transformer encoder inputs matrices of shape: $T^s_j\\times 512$ where $T^s_j$ is the number of words in corresponding speech for proposal $p_j$." ], [ "Since no intermediate layers connecting the features and transformers are used, the dimension of the features transformers $D_T$ corresponds to the size of the extracted features: 512, 128, and 1024 for speech, audio, and visual modalities, respectively. Each feature transformer has one layer ($L$), while the internal layer in the position-wise fully-connected network has $D_P=2048$ units for all modality transformers which was found to perform optimally. We use $H=4$ heads in all multi-headed attention blocks. The captions and speech vocabulary sizes are 10,172 and 23,043, respectively.", "In all experiments, except for the audio-only model, we use Adam optimizer BIBREF56, a batch containing features for 28 proposals, learning rate $10^{-5}$, $\\beta = (0.9, 0.99)$, smoothing parameter $\\gamma = 0.7$. In the audio-only model, we apply two-layered transformer architecture with learning rate $10^{-4}$ and $\\gamma = 0.2$. To regularize the weights of the model, in every experiment, Dropout BIBREF57 with $p = 0.1$ is applied to the outputs of positional encoding, in every sub-layer before adding a residual, and after the first internal layer of the multi-modal generator.", "During the experimentation, models were trained for 200 epochs at most and stopped the training early if for 50 consecutive epochs the average METEOR score calculated on ground truth event proposals of both validation sets has not improved. At the end of the training, we employ the best model to estimate its performance on the learned temporal proposals. Usually the training for the best models culminated by 50th epoch, e. g. the final model (MDVC (S + A + V)) was trained for 30 epochs which took, roughly, 15 hours on one consumer-type GPU (Nvidia GeForce RTX 2080 Ti). The code for training heavily relies on PyTorch framework and will be released upon publication." ], [ "In Table TABREF37, we present a comparison with another body of methods BIBREF8, BIBREF9 which were not included in the main comparison as they were using Reinforcement Learning (RL) approach to directly optimize the non-differentiable metric (METEOR). 
We believe that our method could also benefit from these as the ablation studies BIBREF8, BIBREF9 show significant gains obtained by applying them. As it was anticipated, in general, methods which employ reinforcement learning perform better in terms of METEOR. Interestingly, our model still outperforms BIBREF8 which uses RL in the captioning module." ] ], "section_name": [ "Introduction", "Related Work ::: Video Captioning", "Related Work ::: Dense Video Captioning", "Related Work ::: Multi-modal Dense Video Captioning", "Proposed Framework", "Proposed Framework ::: Temporal Event Localization Module", "Proposed Framework ::: Captioning Module", "Proposed Framework ::: Captioning Module ::: Feature Transformer", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Dot-product Attention", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Multi-headed Attention", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Encoder", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Decoder", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Position-wise Fully-Connected Network", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Generator", "Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Input Embedding and Positional Encoding", "Proposed Framework ::: Captioning Module ::: Multi-modal Dense Video Captioning", "Proposed Framework ::: Model Training", "Experiments ::: Dataset", "Experiments ::: Metrics", "Experiments ::: Comparison with Baseline Methods", "Experiments ::: Ablation Studies", "Conclusion", "Supplementary Material", "Supplementary Material ::: Qualitative Results (Another Example)", "Supplementary Material ::: Details on Feature Extraction", "Supplementary Material ::: Implementation Details", "Supplementary Material ::: Comparison with Other Methods" ] }
{ "answers": [ { "annotation_id": [ "b5db9e6889d00da544fc266e280bf4a20a1560dd" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The METEOR axis is cut up to the random performance level (7.16)." ], "extractive_spans": [], "free_form_answer": "14 categories", "highlighted_evidence": [ "FLOAT SELECTED: Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The METEOR axis is cut up to the random performance level (7.16)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "4de197483d9333719a50c9b56f623c118db96724" ], "answer": [ { "evidence": [ "We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions, around 13.65 words each, and two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set)." ], "extractive_spans": [], "free_form_answer": "YouTube videos", "highlighted_evidence": [ "We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "annotation_id": [ "75a46ec0c373752ee9c8cf0d4c3e488d9ca99e95" ], "answer": [ { "evidence": [ "The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. Authors provide pre-computed C3D features and frames at 5fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos which is, roughly, 91 % of the dataset. Out of these 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system which can be though as subtitles." ], "extractive_spans": [ "YouTube ASR system " ], "free_form_answer": "", "highlighted_evidence": [ "The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system which can be though as subtitles." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "annotation_id": [ "eade262d86190352e07c2d3d77c5f5aca2cac106" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] } ], "nlp_background": [ "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How many category tags are considered?", "What domain does the dataset fall into?", "What ASR system do they use?", "What is the state of the art?" 
], "question_id": [ "ead5dc1f3994b2031a1852ecc4f97ac5760ea977", "86cd1228374721db67c0653f2052b1ada6009641", "7011b26ffc54769897e4859e4932aeddfab82c9f", "3a6559dc6eba7f5abddf3ac27376ba0b9643a908" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1. Example video with ground truth captions and predictions of Multi-modal Dense Video Captioning module (MDVC). It may account for any number of modalities, i. e. audio or speech.", "Figure 2. The proposed Multi-modal Dense Video Captioning (MDVC) framework. Given an input consisting of several modalities, namely, audio, speech, and visual, internal representations are produced by a corresponding feature transformer (middle). Then, the features are fused in the multi-modal generator (right) that outputs the distribution over the vocabulary.", "Figure 3. The proposed feature transformation architecture that consists of an encoder (bottom part) and a decoder (top part). The encoder inputs pre-processed and position-encoded features from I3D (in case of the visual modality), and outputs an internal representation. The decoder, in turn, is conditioned on both position-encoded caption that is generated so far and the output of the encoder. Finally, the decoder outputs its internal representation.", "Table 1. The results of the dense video captioning task on the ActivityNet Captions validation sets in terms of BLEU–3,4 (B@3, B@4) and METEOR (M). The related methods are compared with the proposed approach (MDVC) in two settings: on the full validation dataset and a part of it with the videos with all modalities present for a fair comparison (“no missings”). Methods are additionally split into the ones which “saw” all training videos and another ones which trained on partially available data. The results are presented for both ground truth (GT) and learned proposals.", "Table 2. Comparison of the Feature Transformer and the Bidirectional GRU (Bi-GRU) architectures in terms of BLEU-4 (B@4), METEOR (M), and a number of model parameters. The input to all models is visual modality (I3D). The results indicate the superior performance of the Feature Transformer on all metrics. Additionally, we report the random input baseline which acts as a lower performance bound. The best results are highlighted", "Table 3. The performance of the proposed MDVC framework with different input modalities (V-visual, A-audio, S-speech) and feature fusion approaches: probability averaging and concatenation of two fully-connected layers (Concat. + 2 FC). Also, we report the comparison between audio-visual MDVC with visual-only MDVC with similar model capacities (2 FC).", "Figure 4. The qualitative captioning results for an example video from the ActivityNet Captions validation set. In the video, the speaker describes the advantages of rafting on this particular river and their club. Occasionally, people are shown rapturously speaking about how fun it is. Models that account for audio modality tend to grasp the details of the speaking on the scene while the visual-only models fail at this. We invite the reader to watch the example YouTube video for a better impression (xs5imfBbWmw).", "Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The METEOR axis is cut up to the random performance level (7.16).", "Table 4. The comparison with other dense video captioning methods on ActivityNet Captions validation set estimated with METEOR. The results are presented for the learned proposals.", "Figure 6. Another example of the qualitative results for a video in the validation set. In the video, a lady is shown speaking twice (in p2 and p10). 
Since MDVC is conditioned not only on visual (V) but also speech (S) and audio (A) modalities, it managed to hallucinate a caption containing a “woman” instead of a “man”. We invite a reader to watch it on YouTube for a better impression (EGrXaq213Oc). Note: the frame size mimics the MDVC input; the scale of temporal segments is not precise. Best viewed in color." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Figure4-1.png", "8-Figure5-1.png", "12-Table4-1.png", "13-Figure6-1.png" ] }
[ "How many category tags are considered?", "What domain does the dataset fall into?" ]
[ [ "2003.07758-8-Figure5-1.png" ], [ "2003.07758-Experiments ::: Dataset-0" ] ]
[ "14 categories", "YouTube videos" ]
565
1906.09774
Emotionally-Aware Chatbots: A Survey
The development of textual conversational agents, or chatbots, has gathered tremendous traction from both academia and industry in recent years. Nowadays, chatbots are widely used as agents to communicate with humans in services such as booking assistance and customer service, and also as personal companions. The biggest challenge in building a chatbot is to humanize the machine in order to improve user engagement. Some studies show that emotion is an important aspect of humanizing machines, including chatbots. In this paper, we provide a systematic review of approaches to building an emotionally-aware chatbot (EAC). To the best of our knowledge, there is still no work focusing on surveying this area. We propose three research questions regarding EAC studies. We start with the history and evolution of EAC, then cover several approaches used in previous studies to build EAC, and finally the resources available for building EAC. Based on our investigation, we found that early EAC exploited simple rule-based approaches, while most current EAC use neural-based approaches. We also notice that most EAC contain an emotion classifier in their architecture, which utilizes several available affective resources. We also predict that the development of EAC will continue to gain more and more attention from scholars, as indicated by recent studies proposing new datasets for building EAC in various languages.
{ "paragraphs": [ [ "Conversational agents or dialogue systems development are gaining more attention from both industry and academia BIBREF0 , BIBREF1 in the latest years. Some works tried to model them into domain-specific tasks such as customer service BIBREF2 , BIBREF3 , and shopping assistance BIBREF4 . Other works design a multi-purpose agents such as SIRI, Amazon Alexa, and Google Assistance. This domain is a well-researched area in Human-Computer Interaction research community but still become a hot topic now. The main development focus right now is to have an intelligent and humanizing machine to have a better engagement when communicating with human BIBREF5 . Having a better engagement will lead to higher user satisfaction, which becomes the main objective from the industry perspective.", "In this study, we will only focus on textual conversational agent or chatbot, a conversational artificial intelligence which can conduct a textual communication with a human by exploiting several natural language processing techniques. There are several approaches used to build a chatbot, start by using a simple rule-based approach BIBREF6 , BIBREF7 until more sophisticated one by using neural-based technique BIBREF8 , BIBREF9 . Nowadays, chatbots are mostly used as customer service such as booking systems BIBREF10 , BIBREF11 , shopping assistance BIBREF3 or just as conversational partner such as Endurance and Insomnobot . Therefore, there is a significant urgency to humanize chatbot for having a better user-engagement. Some works were already proposed several approaches to improve chatbot's user-engagement, such as building a context-aware chatbot BIBREF12 and injecting personality into the machine BIBREF13 . Other works also try to incorporate affective computing to build emotionally-aware chatbots BIBREF2 , BIBREF14 , BIBREF15 .", "Some existing studies shows that adding emotion information into dialogue systems is able to improve user-satisfaction BIBREF16 , BIBREF17 . Emotion information contribute to a more positive interaction between machine and human, which lead to reduce miscommunication BIBREF18 . Some previous studies also found that using affect information can help chatbot to understand users' emotional state, in order to generate better response BIBREF19 . Not only emotion, another study also introduce the use of tones to improve satisfactory service. For instance, using empathetic tone is able to reduces user stress and results in more engagement. BIBREF2 found that tones is an important aspect in building customer care chatbot. They discover eight different tones including anxious, frustrated, impolite, passionate, polite, sad, satisfied, and empathetic.", "In this paper, we will try to summarize some previous studies which focus on injecting emotion information into chatbots, on discovering recent issues and barriers in building engaging emotionally-aware chatbots. Therefore, we propose some research questions to have a better problem definition:", "This paper will be organized as follows: Section 2 introduces the history of the relation between affective information with chatbots. Section 3 outline some works which try to inject affective information into chatbots. Section 4 summarizes some affective resources which can be utilized to provide affective information. Then, Section 5 describes some evaluation metric that already applied in some previous works related to emotionally-aware chatbots. 
Last Section 6 will conclude the rest of the paper and provide a prediction of future development in this research direction based on our analysis." ], [ "The early development of chatbot was inspired by Turing test in 1950 BIBREF20 . Eliza was the first publicly known chatbot, built by using simple hand-crafted script BIBREF21 . Parry BIBREF22 was another chatbot which successfully passed the Turing test. Similar to Eliza, Parry still uses a rule-based approach but with a better understanding, including the mental model that can stimulate emotion. Therefore, Parry is the first chatbot which involving emotion in its development. Also, worth to be mentioned is ALICE (Artificial Linguistic Internet Computer Entity), a customizable chatbot by using Artificial Intelligence Markup Language (AIML). Therefore, ALICE still also use a rule-based approach by executing a pattern-matcher recursively to obtain the response. Then in May 2014, Microsoft introduced XiaoIce BIBREF23 , an empathetic social chatbot which is able to recognize users' emotional needs. XiaoIce can provide an engaging interpersonal communication by giving encouragement or other affective messages, so that can hold human attention during communication.", "Nowadays, most of chatbots technologies were built by using neural-based approach. Emotional Chatting Machine (ECM) BIBREF15 was the first works which exploiting deep learning approach in building a large-scale emotionally-aware conversational bot. Then several studies were proposed to deal with this research area by introducing emotion embedding representation BIBREF24 , BIBREF25 , BIBREF26 or modeling as reinforcement learning problem BIBREF27 , BIBREF28 . Most of these studies used encoder-decoder architecture, specifically sequence to sequence (seq2seq) learning. Some works also tried to introduce a new dataset in order to have a better gold standard and improve system performance. BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. Their dataset was used to develop a chatbot which captures human emotional states and elicits positive emotion during the conversation." ], [ "As we mentioned before that emotion is an essential aspect of building humanize chatbot. The rise of the emotionally-aware chatbot is started by Parry BIBREF22 in early 1975. Now, most of EAC development exploits neural-based model. In this section, we will try to review previous works which focus on EAC development. Table TABREF10 summarizes this information includes the objective and exploited approach of each work. In early development, EAC is designed by using a rule-based approach. However, in recent years mostly EAC exploit neural-based approach. Studies in EAC development become a hot topic start from 2017, noted by the first shared task in Emotion Generation Challenge on NLPCC 2017 BIBREF31 . 
Based on Table TABREF10, this research line continues to gain massive attention from scholars in recent years.", "Based on Table TABREF10, we can see that most recent EAC were built using an encoder-decoder architecture with sequence-to-sequence learning. These seq2seq models maximize the likelihood of the response and are able to incorporate rich data to generate an appropriate answer. The basic seq2seq architecture consists of two recurrent neural networks (RNNs): an encoder processing the input and a decoder generating the response. Long short-term memory (LSTM) and gated recurrent unit (GRU) cells were the most common RNN variants used to learn from the conversational data in these models. Some studies also modeled this task as a reinforcement learning problem, in order to obtain less generic responses and enable the chatbot to achieve successful long-term conversations. The attention mechanism was also introduced in several of these works; it allows the decoder to focus only on the important parts of the input at every decoding step (a minimal illustrative sketch is given below).", "Another vital part of building EAC is the emotion classifier, which detects the emotion contained in the text in order to produce a more meaningful response. Emotion detection is a well-established task in the natural language processing research area, promoted in the two latest SemEval series, SemEval-2018 (Task 1) and SemEval-2019 (Task 3). Some tasks focus on classifying an utterance into several emotion categories BIBREF32 , while others try to predict the emotion intensities contained in the text BIBREF33 . In the early development of emotion classifiers, most studies used traditional machine-learning approaches; however, neural-based approaches achieve better performance, which leads more scholars to adopt them for this task. In a chatbot, the system generates several candidate responses corresponding to different emotion categories and then responds with the most appropriate emotion, based on the emotion detected in the posted utterance by the emotion classifier. As shown in Table TABREF10 , studies use different emotion categories depending on their focus and objective in building the chatbot." ], [ "In this section, we investigate the available resources for building EAC. As with other artificial intelligence agents, building a chatbot requires a dataset to learn from in order to produce meaningful, human-like conversation. Therefore, some studies propose datasets containing textual conversations annotated with different emotion categories. Table TABREF12 summarizes the datasets found in recent years. We categorize each dataset by language, source of the data, and a further description containing information such as the annotation approach, number of instances, and emotion labels. All datasets were proposed during 2017 and 2018, starting with the dataset provided by the NLPCC 2017 Shared Task on Emotion Generation Challenge organizers. This dataset was gathered from the Sina Weibo social media platform, so it consists of social conversations in Chinese. Based on our study, all of the datasets that we discovered are available in only two languages, English and Chinese. However, the sources of these datasets are diverse, including social media (Twitter, Sina Weibo, and Facebook Messenger), online content, and human writing through a crowdsourcing scenario. 
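As referenced above, the following is a minimal PyTorch sketch of the kind of GRU-based encoder-decoder with attention that most neural EAC systems build on; the dimensions and the dot-product attention variant are illustrative assumptions, not details taken from any particular surveyed system.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab, emb=128, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)

    def forward(self, src):                        # src: (B, T_src) token ids
        out, h = self.rnn(self.emb(src))           # out: (B, T_src, hid)
        return out, h

class AttnDecoder(nn.Module):
    def __init__(self, vocab, emb=128, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def step(self, prev_token, h, enc_out):
        # dot-product attention over encoder states
        scores = torch.bmm(enc_out, h[-1].unsqueeze(2))         # (B, T_src, 1)
        ctx = (torch.softmax(scores, dim=1) * enc_out).sum(1)   # (B, hid)
        x = torch.cat([self.emb(prev_token), ctx], dim=-1)      # (B, emb + hid)
        out, h = self.rnn(x.unsqueeze(1), h)                    # one decoding step
        return self.out(out.squeeze(1)), h                      # logits over vocab
```

Emotion-aware variants typically extend such a skeleton, for example by conditioning the decoder on an emotion embedding or by reranking candidate responses with an emotion classifier.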
Our investigation found that every dataset uses a different set of emotion labels, depending on its focus and objective in building the chatbot.", "As mentioned in the previous section, the emotion classifier is an integral part of an emotionally-aware chatbot. For building emotion classifiers, several affective resources are available and already widely used in the emotion classification task. Table TABREF16 shows the available affective resources we discovered, ranging from older resources such as LIWC, ANEW, and DAL, to more recent ones such as DepecheMood and EmoWordNet. Based on prior studies, emotion can be described from two fundamental viewpoints: discrete categories and dimensional models. The discrete viewpoint treats emotion as a set of primary categories; the most popular scheme was proposed by BIBREF37 and differentiates emotion into six categories: anger, disgust, fear, happiness, sadness, and surprise. Several resources can be used to obtain the emotion contained in a text under this view, including EmoLex, EmoSenticNet, LIWC, DepecheMood, and EmoWordNet. The dimensional model is another viewpoint that defines emotion according to one or more dimensions; under this approach, an emotion corresponds to a set of coordinate values along dimensions such as valence and arousal. Several lexical resources provide information about these emotional dimensions, including the Dictionary of Affect in Language (DAL), ANEW, and NRC VAD." ], [ "We characterize the evaluation of emotionally-aware chatbots into two different parts: qualitative and quantitative assessment. Qualitative assessment focuses on assessing the functionality of the software, while quantitative assessment focuses on measuring the chatbot's performance numerically." ], [ "Based on our investigation of several previous studies, we found that most of the works utilized ISO 9241 to assess chatbot quality by focusing on the usability aspect. This aspect can be grouped into three focuses, namely efficiency, effectiveness, and satisfaction, concerning the system's performance in achieving the specified goals. Here we explain every focus based on several categories and quality attributes.", "The efficiency aspect covers several categories, including robustness to manipulation and unexpected input BIBREF55 . Another study assesses the chatbot's ability to control damage and inappropriate utterances BIBREF56 .", "The effectiveness aspect covers two categories, functionality and humanity. From the functionality point of view, a study by BIBREF57 proposes to assess how accurately a chatbot can interpret commands and provide its status report. Other functionalities, such as the chatbot's ability to execute the task as requested, the linguistic accuracy of the output, and ease of use, are also suggested to be assessed BIBREF58 . Meanwhile, from the humanity aspect, most studies suggest that a conversational machine should pass the Turing test BIBREF21 . Other prominent abilities that a chatbot needs to master are responding to specific questions and maintaining a themed discussion.", "The satisfaction aspect has three categories: affect, ethics and behaviour, and accessibility. Affect is the most suitable assessment category for EAC. 
This category asses several quality aspects such as, chatbots' ability to convey personality, give conversational cues, provide emotional information through tone, inflexion, and expressivity, entertain and/or enable the participant to enjoy the interaction and also read and respond to moods of human participant BIBREF59 . Ethic and behaviour category focuses on how a chatbot can protect and respect privacy BIBREF57 . Other quality aspects, including sensitivity to safety and social concerns and trustworthiness BIBREF60 . The last categories are accessibility, which the main quality aspect focus to assess the chatbot ability to detect meaning or intent and, also responds to social cues ." ], [ "In automatic evaluation, some studies focus on evaluating the system at emotion level BIBREF15 , BIBREF28 . Therefore, some common metrics such as precision, recall, and accuracy are used to measure system performance, compared to the gold label. This evaluation is similar to emotion classification tasks such as previous SemEval 2018 BIBREF32 and SemEval 2019 . Other studies also proposed to use perplexity to evaluate the model at the content level (to determine whether the content is relevant and grammatical) BIBREF14 , BIBREF39 , BIBREF28 . This evaluation metric is widely used to evaluate dialogue-based systems which rely on probabilistic approach BIBREF61 . Another work by BIBREF14 used BLEU to evaluate the machine response and compare against the gold response (the actual response), although using BLEU to measure conversation generation task is not recommended by BIBREF62 due to its low correlation with human judgment.", "This evaluation involves human judgement to measure the chatbots' performance, based on several criteria. BIBREF15 used three annotators to rate chatbots' response in two criteria, content (scale 0,1,2) and emotion (scale 0,1). Content is focused on measuring whether the response is natural acceptable and could plausible produced by a human. This metric measurement is already adopted and recommended by researchers and conversation challenging tasks, as proposed in BIBREF38 . Meanwhile, emotion is defined as whether the emotion expression contained in the response agrees with the given gold emotion category. Similarly, BIBREF28 used four annotators to score the response based on consistency, logic and emotion. Consistency measures the fluency and grammatical aspect of the response. Logic measures the degree whether the post and response logically match. Emotion measures the response, whether it contains the appropriate emotion. All of these aspects were measured by three scales 0, 1, and 2. Meanwhile, BIBREF39 proposed naturalness and emotion impact as criteria to evaluate the chatbots' response. Naturalness evaluates whether the response is intelligible, logically follows the context of the conversation, and acceptable as a human response, while emotion impact measures whether the response elicits a positive emotional or triggers an emotionally-positive dialogue, since their study focus only on positive emotion. Another study by BIBREF14 uses crowdsourcing to gather human judgement based on three aspects of performance including empathy/sympathy - did the responses show understanding of the feelings of the person talking about their experience?; relevance - did the responses seem appropriate to the conversation? Were they on-topic?; and fluency - could you understand the responses? Did the language seem accurate?. 
All of these aspects recorded with three different response, i.e., (1: not at all, 3: somewhat, 5: very much) from around 100 different annotators. After getting all of the human judgement with different criteria, some of these studies used a t-test to get the statistical significance BIBREF28 , BIBREF39 , while some other used inter-annotator agreement measurement such as Fleiss Kappa BIBREF15 , BIBREF14 . Based on these evaluations, they can compare their system performance with baseline or any other state of the art systems." ], [ "There is some work which tried to provide a full story of chatbot development both in industries and academic environment. However, as far as my knowledge, there is still no study focused on summarizing the development of chatbot which taking into account the emotional aspect, that getting more attention in recent years. BIBREF63 provides a long history of chatbots technology development. They also described several uses of chatbots' in some practical domains such as tools entertainment, tools to learn and practice language, information retrieval tools, and assistance for e-commerce of other business activities. Then, BIBREF64 reviewed the development of chatbots from rudimentary model to more advanced intelligent system. They summarized several techniques used to develop chatbots from early development until recent years. Recently, BIBREF65 provide a more systematic shape to review some previous works on chatbots' development. They classify chatbots into two main categories based on goals, including task-oriented chatbots and non-task oriented chatbot. They also classify chatbot based on its development technique into three main categories, including rule-based, retrieval-based, and generative-based approach. Furthermore, they also summarized the detailed technique on these three main approaches." ], [ "In this work, a systematic review of emotionally-aware chatbots is proposed. We focus on three main issues, including, how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance. The rise of EAC was started by Parry, which uses a simple rule-based approach. Now, most of EAC are built by using a neural-based approach, by exploiting emotion classifier to detect emotion contained in the text. In the modern era, the development of EAC gains more attention since Emotion Generation Challenge shared task on NLPCC 2017. In this era, most EAC is developed by adopting encoder-decoder architecture with sequence-to-sequence learning. Some variant of the recurrent neural network is used in the learning process, including long-short-term memory (LSTM) and gated recurrent unit (GRU). There are also some datasets available for developing EAC now. However, the datasets are only available in English and Chinese. These datasets are gathered from various sources, including social media, online website and manual construction by crowdsourcing. Overall, the difference between these datasets and the common datasets for building chatbot is the presence of an emotion label. In addition, we also investigate the available affective resources which usually use in the emotion classification task. In this part, we only focus on English resources and found several resources from the old one such as LIWC and Emolex to the new one, including DepecheMood and EmoWordNet. 
In the final part, we gather information about how to evaluate the performance of EAC, and we classify the approaches into two techniques: qualitative and quantitative assessment. For qualitative assessment, most studies used ISO 9241, which covers aspects such as efficiency, effectiveness, and satisfaction. For quantitative assessment, two techniques can be used: automatic evaluation (e.g., using perplexity) and manual evaluation (involving human judgement). Overall, we can see that the effort to humanize chatbots by incorporating the affective aspect is becoming a hot topic. We also predict that this development will continue towards a multilingual perspective, since up to now every chatbot focuses on only one language. In addition, we think that future studies on humanizing chatbots will not only utilize emotion information but will also focus on contextually-aware chatbots." ] ], "section_name": [ "Introduction", "History of Emotionally-Aware Chatbot", "Building Emotionally-Aware Chatbot (EAC)", "Resource for Building EAC", "Evaluating EAC", "Qualitative Assessment", "Quantitative Assessment", "Related Work", "Discussion and Conclusion" ] }
{ "answers": [ { "annotation_id": [ "62311b8985db0206af5e9c477b8860c910940c0a" ], "answer": [ { "evidence": [ "We characterize the evaluation of Emotionally-Aware Chatbot into two different parts, qualitative and quantitative assessment. Qualitative assessment will focus on assessing the functionality of the software, while quantitative more focus on measure the chatbots' performance with a number.", "Based on our investigation of several previous studies, we found that most of the works utilized ISO 9241 to assess chatbots' quality by focusing on the usability aspect. This aspect can be grouped into three focuses, including efficiency, effectiveness, and satisfaction, concerning systems' performance to achieve the specified goals. Here we will explain every focus based on several categories and quality attributes.", "In automatic evaluation, some studies focus on evaluating the system at emotion level BIBREF15 , BIBREF28 . Therefore, some common metrics such as precision, recall, and accuracy are used to measure system performance, compared to the gold label. This evaluation is similar to emotion classification tasks such as previous SemEval 2018 BIBREF32 and SemEval 2019 . Other studies also proposed to use perplexity to evaluate the model at the content level (to determine whether the content is relevant and grammatical) BIBREF14 , BIBREF39 , BIBREF28 . This evaluation metric is widely used to evaluate dialogue-based systems which rely on probabilistic approach BIBREF61 . Another work by BIBREF14 used BLEU to evaluate the machine response and compare against the gold response (the actual response), although using BLEU to measure conversation generation task is not recommended by BIBREF62 due to its low correlation with human judgment.", "This evaluation involves human judgement to measure the chatbots' performance, based on several criteria. BIBREF15 used three annotators to rate chatbots' response in two criteria, content (scale 0,1,2) and emotion (scale 0,1). Content is focused on measuring whether the response is natural acceptable and could plausible produced by a human. This metric measurement is already adopted and recommended by researchers and conversation challenging tasks, as proposed in BIBREF38 . Meanwhile, emotion is defined as whether the emotion expression contained in the response agrees with the given gold emotion category. Similarly, BIBREF28 used four annotators to score the response based on consistency, logic and emotion. Consistency measures the fluency and grammatical aspect of the response. Logic measures the degree whether the post and response logically match. Emotion measures the response, whether it contains the appropriate emotion. All of these aspects were measured by three scales 0, 1, and 2. Meanwhile, BIBREF39 proposed naturalness and emotion impact as criteria to evaluate the chatbots' response. Naturalness evaluates whether the response is intelligible, logically follows the context of the conversation, and acceptable as a human response, while emotion impact measures whether the response elicits a positive emotional or triggers an emotionally-positive dialogue, since their study focus only on positive emotion. Another study by BIBREF14 uses crowdsourcing to gather human judgement based on three aspects of performance including empathy/sympathy - did the responses show understanding of the feelings of the person talking about their experience?; relevance - did the responses seem appropriate to the conversation? 
Were they on-topic?; and fluency - could you understand the responses? Did the language seem accurate?. All of these aspects recorded with three different response, i.e., (1: not at all, 3: somewhat, 5: very much) from around 100 different annotators. After getting all of the human judgement with different criteria, some of these studies used a t-test to get the statistical significance BIBREF28 , BIBREF39 , while some other used inter-annotator agreement measurement such as Fleiss Kappa BIBREF15 , BIBREF14 . Based on these evaluations, they can compare their system performance with baseline or any other state of the art systems." ], "extractive_spans": [], "free_form_answer": "Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement.", "highlighted_evidence": [ "We characterize the evaluation of Emotionally-Aware Chatbot into two different parts, qualitative and quantitative assessment. Qualitative assessment will focus on assessing the functionality of the software, while quantitative more focus on measure the chatbots' performance with a number.", "Based on our investigation of several previous studies, we found that most of the works utilized ISO 9241 to assess chatbots' quality by focusing on the usability aspect. This aspect can be grouped into three focuses, including efficiency, effectiveness, and satisfaction, concerning systems' performance to achieve the specified goals.", "In automatic evaluation, some studies focus on evaluating the system at emotion level BIBREF15 , BIBREF28 . Therefore, some common metrics such as precision, recall, and accuracy are used to measure system performance, compared to the gold label. This evaluation is similar to emotion classification tasks such as previous SemEval 2018 BIBREF32 and SemEval 2019 . Other studies also proposed to use perplexity to evaluate the model at the content level (to determine whether the content is relevant and grammatical) BIBREF14 , BIBREF39 , BIBREF28 . This evaluation metric is widely used to evaluate dialogue-based systems which rely on probabilistic approach BIBREF61 . Another work by BIBREF14 used BLEU to evaluate the machine response and compare against the gold response (the actual response), although using BLEU to measure conversation generation task is not recommended by BIBREF62 due to its low correlation with human judgment.", "This evaluation involves human judgement to measure the chatbots' performance, based on several criteria. BIBREF15 used three annotators to rate chatbots' response in two criteria, content (scale 0,1,2) and emotion (scale 0,1). Content is focused on measuring whether the response is natural acceptable and could plausible produced by a human. This metric measurement is already adopted and recommended by researchers and conversation challenging tasks, as proposed in BIBREF38 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "aa702be2b5251fda18625194e28e02b4cc32f322" ], "answer": [ { "evidence": [ "Nowadays, most of chatbots technologies were built by using neural-based approach. Emotional Chatting Machine (ECM) BIBREF15 was the first works which exploiting deep learning approach in building a large-scale emotionally-aware conversational bot. 
Then several studies were proposed to deal with this research area by introducing emotion embedding representation BIBREF24 , BIBREF25 , BIBREF26 or modeling as reinforcement learning problem BIBREF27 , BIBREF28 . Most of these studies used encoder-decoder architecture, specifically sequence to sequence (seq2seq) learning. Some works also tried to introduce a new dataset in order to have a better gold standard and improve system performance. BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. Their dataset was used to develop a chatbot which captures human emotional states and elicits positive emotion during the conversation." ], "extractive_spans": [ "EMPATHETICDIALOGUES dataset", "a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries", "SEMAINE corpus BIBREF30" ], "free_form_answer": "", "highlighted_evidence": [ "BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "50460eeeae5964eb5aedbe9534422cab6e92ba3f" ], "answer": [ { "evidence": [ "In this work, a systematic review of emotionally-aware chatbots is proposed. We focus on three main issues, including, how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance. The rise of EAC was started by Parry, which uses a simple rule-based approach. Now, most of EAC are built by using a neural-based approach, by exploiting emotion classifier to detect emotion contained in the text. In the modern era, the development of EAC gains more attention since Emotion Generation Challenge shared task on NLPCC 2017. In this era, most EAC is developed by adopting encoder-decoder architecture with sequence-to-sequence learning. Some variant of the recurrent neural network is used in the learning process, including long-short-term memory (LSTM) and gated recurrent unit (GRU). There are also some datasets available for developing EAC now. However, the datasets are only available in English and Chinese. These datasets are gathered from various sources, including social media, online website and manual construction by crowdsourcing. 
Overall, the difference between these datasets and the common datasets for building chatbot is the presence of an emotion label. In addition, we also investigate the available affective resources which usually use in the emotion classification task. In this part, we only focus on English resources and found several resources from the old one such as LIWC and Emolex to the new one, including DepecheMood and EmoWordNet. In the final part, we gather information about how to evaluate the performance of EAC, and we can classify the approach into two techniques, including qualitative and quantitative assessment. For qualitative assessment, most studies used ISO 9241, which covers several aspects such as efficiency, effectiveness, and satisfaction. While in quantitative analysis, two techniques can be used, including automatic evaluation (by using perplexity) and manual evaluation (involving human judgement). Overall, we can see that effort to humanize chatbots by incorporation affective aspect is becoming the hot topic now. We also predict that this development will continue by going into multilingual perspective since up to now every chatbot only focusing on one language. Also, we think that in the future the studies of humanizing chatbot are not only utilized emotion information but will also focus on a contextual-aware chatbot." ], "extractive_spans": [ "how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance" ], "free_form_answer": "", "highlighted_evidence": [ "We focus on three main issues, including, how to incorporate affective information into chatbots, what are resources that available and can be used to build EAC, and how to evaluate EAC performance." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "How are EAC evaluated?", "What are the currently available datasets for EAC?", "What are the research questions posed in the paper regarding EAC studies?" ], "question_id": [ "b5a2b03cfc5a64ad4542773d38372fffc6d3eac7", "b093b440ae3cd03555237791550f3224d159d85b", "ad16c8261c3a0b88c685907387e1a6904eb15066" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Summarization of the Proposed Approaches for Emotionally-Aware Chatbot Development.", "Table 2: Summarization of dataset available for emotionally-aware chatbot.", "Table 3: Summarization of the available affective resources for emotion classification task." ], "file": [ "3-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png" ] }
[ "How are EAC evaluated?" ]
[ [ "1906.09774-Quantitative Assessment-1", "1906.09774-Quantitative Assessment-0", "1906.09774-Qualitative Assessment-0", "1906.09774-Evaluating EAC-0" ] ]
[ "Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement." ]
573
1810.03459
Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling
The sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits from performing model training without using a lexicon or alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model gives substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to models trained with twice as much training data.
{ "paragraphs": [ [ "The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration.", "In this paper, we explore the multilingual training approaches BIBREF8 , BIBREF9 , BIBREF10 used in hybrid DNN/RNN-HMMs to incorporate them into the seq2seq models. In a context of applications of multilingual approaches towards seq2seq model, CTC is mainly used instead of the attention models. A multilingual CTC is proposed in BIBREF11 , which uses a universal phoneset, FST decoder and language model. The authors also use linear hidden unit contribution (LHUC) BIBREF12 technique to rescale the hidden unit outputs for each language as a way to adapt to a particular language. Another work BIBREF13 on multilingual CTC shows the importance of language adaptive vectors as auxiliary input to the encoder in multilingual CTC model. The decoder used here is a simple INLINEFORM0 decoder. An extensive analysis on multilingual CTC mainly focusing on improving under limited data condition is performed in BIBREF14 . Here, the authors use a word level FST decoder integrated with CTC during decoding.", "On a similar front, attention models are explored within a multilingual setup in BIBREF15 , BIBREF16 based on attention-based seq2seq to build a model from multiple languages. The data is just combined together assuming the target languages are seen during the training. And, hence no special transfer learning techniques were used here to address the unseen languages during training. The main motivation and contribution behind this work is as follows:" ], [ "In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 .", "Let INLINEFORM0 be a INLINEFORM1 -length speech feature sequence and INLINEFORM2 be a INLINEFORM3 -length grapheme sequence. A multi-objective learning framework INLINEFORM4 proposed in BIBREF17 is used in this work to unify attention loss INLINEFORM5 and CTC loss INLINEFORM6 with a linear interpolation weight INLINEFORM7 , as follows: DISPLAYFORM0 ", "The unified model allows to obtain both monotonicity and effective sequence level training.", " INLINEFORM0 represents the posterior probability of character label sequence INLINEFORM1 w.r.t input sequence INLINEFORM2 based on the attention approach, which is decomposed with the probabilistic chain rule, as follows: DISPLAYFORM0 ", "where INLINEFORM0 denotes the ground truth history. 
Detailed explanations of the attention mechanism are given later.", "Similarly, INLINEFORM0 represents the posterior probability based on the CTC approach. DISPLAYFORM0 ", "where INLINEFORM0 is a CTC state sequence composed of the original grapheme set and the additional blank symbol. INLINEFORM1 is a set of all possible sequences given the character sequence INLINEFORM2 .", "The following paragraphs explain the encoder, attention decoder, CTC, and joint decoding used in our approach." ], [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "80-dimensional Mel-filterbank (fbank) features are extracted from the speech samples using a sliding window of size 25 ms with a 10 ms stride. The KALDI toolkit BIBREF24 is used to perform the feature processing. The fbank features are then fed to a seq2seq model with the following configuration:", "The Bi-RNN BIBREF25 models mentioned above use an LSTM BIBREF26 cell followed by a projection layer (BLSTMP). In our experiments below, we use only a character-level seq2seq model trained with the CTC and attention decoders. Thus, in the following experiments, we use character error rate (% CER) as the measure to analyze model performance. However, in section SECREF26 we integrate a character-level RNNLM BIBREF27 with the seq2seq model externally and report the performance in terms of word error rate (% WER). In this case, the words are obtained by concatenating the characters and the space symbol together for scoring against the reference words. All experiments are implemented in ESPnet, an end-to-end speech processing toolkit BIBREF28 ." ], [ "Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been applied to tackle the problem of low-resource data conditions. These approaches include language adaptive training and shared layer retraining BIBREF29. Among them, the most beneficial method is the parameter sharing technique BIBREF10. To incorporate this approach into the encoder, CTC and attention decoder model, we performed the following experiments:" ], [ "In this approach, the model is first trained with the 10 languages denoted in table TABREF14 , amounting to approximately 600 hours of training data. Data from all languages available during training is used to build a single seq2seq model. The model is trained with a character label set composed of characters from all languages, including both the train and target sets as mentioned in table TABREF14 . The model provides better generalization across languages. Training languages with limited data jointly with other languages makes them more robust and helps in improving recognition performance. In spite of being simple, this approach has the limitation that the target language data cannot remain unseen during training.", "Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. 
In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below." ], [ "To alleviate the limitation of the previous approach, the final layer of the seq2seq model, which is mainly responsible for classification, is retrained to the target language.", "In previous works on hybrid DNN/RNN models BIBREF10 , BIBREF29 and CTC-based models BIBREF11 , BIBREF14 , only the softmax layer is adapted. However, in our case, both the attention decoder and the CTC decoder have to be retrained to the target language. This means that only the CTC and attention layers receive gradient updates during this stage. We found that the SGD optimizer with an initial learning rate of INLINEFORM0 works better for retraining than AdaDelta.", "The learning rate is decayed by a factor of INLINEFORM0 during this training if there is a drop in validation accuracy. Table TABREF20 shows the performance of simply retraining the last layer for a single target language, Assamese." ], [ "Based on the observations from the stage-1 model in section SECREF22 , we found that simply retraining the decoder towards a target language degraded the performance from 45.6 to 61.3 %CER. This is mainly due to the difference in distribution between encoder and decoder. So, to alleviate this difference, the encoder and decoder are once again retrained or fine-tuned using the model from stage-1. The optimizer used here is SGD as in stage-1, but the initial learning rate is kept at INLINEFORM0 and decayed based on validation performance. The resulting model gave an absolute gain of 1.6% when fine-tuning from the multilingual model after its 4th epoch. Fine-tuning from the model after its 15th epoch gave an absolute gain of 4.3%.", "To further investigate the performance of this approach across different target data sizes, we split the train set into INLINEFORM0 5 hours, INLINEFORM1 10 hours, INLINEFORM2 20 hours and INLINEFORM3 the full set. Since in this approach the model is only fine-tuned by initializing from the stage-1 model, the model architecture is fixed for all data sizes. Figure FIGREF23 shows the effectiveness of fine-tuning both encoder and decoder. The gains from 5 to 10 hours were larger than those from 20 hours to the full set.", "Table TABREF25 tabulates the % CER obtained by retraining the stage-1 model with the INLINEFORM0 full set of target language data. An absolute gain is observed using stage-2 retraining across all languages compared to the monolingual models." ], [ "In an ASR system, a language model (LM) plays an important role by incorporating external knowledge into the system. Conventional ASR systems combine an LM with an acoustic model via an FST, giving a large performance gain. This holds in general, including for hybrid ASR systems and neural network-based sequence-to-sequence ASR systems.", "The following experiments show the benefit of using a language model in decoding with the previous stage-2 transferred models. Although performance gains in %CER are also generally observed across all target languages, the improvement in %WER was more distinctive. The results shown in the following Fig. FIGREF27 are in %WER. 
“Whole” in each figure means that all available data for the target language (the full set described earlier) was used.", "", "", "We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes.", "As explained already, language models were trained separately and used to decode jointly with the seq2seq models. The intuition is to use the separately trained language model as a complementary component that works with the implicit language model within the seq2seq decoder. The RNNLM assists decoding according to the equation below: DISPLAYFORM0 ", " INLINEFORM0 is a scaling factor that combines the scores from joint decoding eq.( EQREF13 ) with the RNN-LM score, denoted as INLINEFORM1 . This approach is called shallow fusion.", "Our experiments on the target languages show that the gains from adding the RNNLM are consistent regardless of the amount of data used for transfer learning. In other words, in Figure FIGREF27 , the gap between the two lines is almost constant across all languages.", "Also, we observe that the gain from adding the RNN-LM in decoding is large. For example, in the case of Assamese, a model retrained on 5 hours of target language data and decoded with the RNN-LM is almost comparable to a model stage-2 retrained with 20 hours of target language data. On average, an absolute gain of INLINEFORM0 6% is obtained across all target languages, as noted in table TABREF28 ." ], [ "In this work, we have shown the importance of transfer learning approaches such as stage-2 multilingual retraining in a seq2seq model setting. Also, careful selection of train and target languages from BABEL provides a wide variety in recognition performance (%CER) and helps in understanding the efficacy of the seq2seq model. The experiments using a character-based RNNLM showed the importance of the language model in boosting recognition performance (%WER) across all amounts of target data available for transfer learning.", "Tables TABREF25 and TABREF28 summarize the effect of these techniques in terms of %CER and %WER. These methods are also flexible enough to be incorporated into an attention and CTC based seq2seq model without a loss in performance." ], [ "We could use better architectures such as VGG-BLSTM as the multilingual prior model before transferring it to a new target language via stage-2 retraining. The naive multilingual approach can be improved by including language vectors as input or target during training to reduce confusion. Also, investigating multilingual bottleneck features BIBREF30 for the seq2seq model could provide better performance. Apart from the character-level language model used in this work, a word-level RNNLM could be connected during decoding to further improve %WER. The attention-based decoder could also be aided by the RNNLM using the cold fusion approach during training to attain a better-trained model. 
In the near future, we will incorporate all the above techniques to achieve performance comparable to state-of-the-art hybrid DNN/RNN-HMM systems." ] ], "section_name": [ "Introduction", "Sequence-to-Sequence Model", "Data details and experimental setup", "Multilingual experiments", "Stage 0 - Naive approach", "Stage 1 - Retraining decoder only", "Stage 2 - Finetuning both encoder and decoder", "Multilingual RNNLM", "Conclusion", "Future work" ] }
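As a companion to the shallow-fusion decoding rule described in the multilingual RNNLM section above, the toy sketch below shows how an external character-level LM score can be added to the seq2seq score when ranking next-character candidates. The probability tables and the LM weight are made-up placeholders, not values from the experiments.

```python
# Toy illustration of shallow fusion at one decoding step:
#   score(c) = log p_s2s(c | history) + gamma * log p_lm(c | history)
import numpy as np

vocab = ["a", "b", "c", "<space>"]
log_p_s2s = np.log(np.array([0.50, 0.20, 0.20, 0.10]))  # joint CTC/attention score (placeholder)
log_p_lm = np.log(np.array([0.10, 0.60, 0.20, 0.10]))   # character RNNLM score (placeholder)
gamma = 0.3                                             # LM scaling factor (assumed value)

fused = log_p_s2s + gamma * log_p_lm
best = vocab[int(np.argmax(fused))]
print(dict(zip(vocab, fused.round(3))), "->", best)
```

In a full beam search the same fused score would be accumulated along every hypothesis rather than applied greedily as in this single-step illustration.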
{ "answers": [ { "annotation_id": [ "84557b762ca8e23100b16065c4a0968337b65221" ], "answer": [ { "evidence": [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation." ], "extractive_spans": [ " BABEL speech corpus " ], "free_form_answer": "", "highlighted_evidence": [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "bcc58d28e8b4a97d534cd089ecb5fdc3af7254b1" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "a150d22edd655723d0c8bb780467c01ba522b3ef" ], "answer": [ { "evidence": [ "In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "FLOAT SELECTED: Table 1: Details of the BABEL data used for performing the multilingual experiments" ], "extractive_spans": [], "free_form_answer": "Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages.", "highlighted_evidence": [ "Table TABREF14 presents the details of the languages used in this work for training and evaluation.", "FLOAT SELECTED: Table 1: Details of the BABEL data used for performing the multilingual experiments" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "536bde96e2614e35b00f7ac9895813df59846366" ], "answer": [ { "evidence": [ "Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below.", "We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. 
Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes." ], "extractive_spans": [ "VGG-BLSTM", "character-level RNNLM" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance.", "We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What data do they train the language models on?", "Do they report BLEU scores?", "What languages do they use?", "What architectures are explored to improve the seq2seq model?" ], "question_id": [ "fb56743e942883d7e74a73c70bd11016acddc348", "093dd1e403eac146bcd19b51a2ace316b36c6264", "1adbdb5f08d67d8b05328ccc86d297ac01bf076c", "da82b6dad2edd4911db1dc59e4ccd7f66c5fd79c" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Fig. 1: Hybrid attention/CTC network with LM extension: the shared encoder is trained by both CTC and attention model objectives simultaneously. The joint decoder predicts an output label sequence by the CTC, attention decoder and RNN-LM.", "Table 1: Details of the BABEL data used for performing the multilingual experiments", "Table 2: Experiment details", "Table 4: Comparison of naive approach and training only the last layer performed using the Assamese language", "Table 3: Recognition performance of naive multilingual approach for eval set of 10 BABEL training languages trained with the train set of same languages", "Table 5: Stage-2 retraining across all languages with full set of target language data", "Fig. 2: Difference in performance for 5 hours, 10 hours, 20 hours and full set of target language data used to retrain a multilingual model from stage-1", "Table 6: Recognition performance in %WER using stage-2 retraining and multilingual RNNLM", "Fig. 3: Recognition performance after integrating RNNLM during decoding in %WER for different amounts of target data" ], "file": [ "4-Figure1-1.png", "5-Table1-1.png", "5-Table2-1.png", "5-Table4-1.png", "6-Table3-1.png", "6-Table5-1.png", "6-Figure2-1.png", "7-Table6-1.png", "7-Figure3-1.png" ] }
[ "What languages do they use?" ]
[ [ "1810.03459-5-Table1-1.png", "1810.03459-Data details and experimental setup-0" ] ]
[ "Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages." ]
579
1910.06061
Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels
In low-resource settings, the performance of supervised labeling models can be improved with automatically annotated or distantly supervised data, which is cheap to create but often noisy. Previous works have shown that significant improvements can be reached by injecting information about the confusion between clean and noisy labels in this additional training data into the classifier training. However, for noise estimation, these approaches either do not take the input features (in our case word embeddings) into account, or they need to learn the noise modeling from scratch which can be difficult in a low-resource setting. We propose to cluster the training data using the input features and then compute different confusion matrices for each cluster. To the best of our knowledge, our approach is the first to leverage feature-dependent noise modeling with pre-initialized confusion matrices. We evaluate on low-resource named entity recognition settings in several languages, showing that our methods improve upon other confusion-matrix based methods by up to 9%.
{ "paragraphs": [ [ "Most languages, even with millions of speakers, have not been the center for natural language processing and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only few labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g. BIBREF2).", "A variety of ideas have been proposed to overcome the issues of noisy training data. One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure. However, most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise. This may disregard important information. The global confusion matrix BIBREF3 is a simple model which assumes that the errors in the noisy labels just depend on the clean labels.", "Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ], [ "A popular approach is modeling the relationship between noisy and clean labels, i.e., estimating $p(\\hat{y}|y)$ where $y$ is the clean and $\\hat{y}$ the noisy label. For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section SECREF3. Having its roots in statistics BIBREF4, this or similar ideas have been recently studied in NLP BIBREF2, BIBREF3, BIBREF5, image classification BIBREF6, BIBREF7, BIBREF8 and general machine learning settings BIBREF9, BIBREF10, BIBREF11. All of these methods, however, do not take the features into account that are used to represent the instances during classification. In BIBREF12 only the noise type depends on $x$ but not the actual noise model. BIBREF13 and BIBREF14 use the learned feature representation $h$ to model $p(\\hat{y}|y,h(x))$ for image classification and relation extraction respectively. In the work of BIBREF15, $p(y|\\hat{y},h(x))$ is estimated to clean the labels for an image classification task. The survey by BIBREF16 gives a detailed overview about other techniques for learning in the presence of noisy labels.", "Specific to learning noisy sequence labels in NLP, BIBREF2 used a combination of clean and noisy data for low-resource POS tagging. BIBREF17 suggested partial annotation learning to lessen the effects of incomplete annotations and reinforcement learning for filtering incorrect labels for Chinese NER. BIBREF3 used a confusion matrix and proposed to leverage pairs of clean and noisy labels for its initialization, evaluating on English NER. 
For English NER and Chunking, BIBREF5 also used a confusion matrix but learned it with an EM approach and combined it with multi-task learning. Recently, BIBREF18 studied input from different, unreliable sources and how to combine them for NER prediction." ], [ "We assume a low-resource setting with a small set of gold standard annotated data $C$ consisting of instances with features $x$ and corresponding, clean labels $y$. Additionally, a large set of noisy instances $(x,\\hat{y}) \\in N$ is available. This can be obtained e.g. from weak or distant supervision. In a multi-class classification setting, we can learn the probability of a label $y$ having a specific class given the feature $x$ as", "where $k$ is the number of classes, $h$ is a learned, non-linear function (in our case a neural network) and $u$ is the softmax weights. This is our base model trained on $C$. Due to the errors in the labels, the clean and noisy labels have different distributions. Therefore, learning on $C$ and $N$ jointly can be detrimental for the performance of predicting unseen, clean instances. Nevertheless, the noisy-labeled data is still related to $C$ and can contain useful information that we want to successfully leverage. We transform the predicted (clean) distribution of the base model to the noisy label distribution", "The relationship is modeled using a confusion matrix (also called noise or transformation matrix or noise layer) with learned weights $b_{ij}$:", "The overall architecture is visualized in Figure FIGREF4. An important question is how to initialize this noise layer. As proposed by BIBREF3, we apply the same distant supervision technique used to obtain $N$ from unlabeled data on the already labeled instances in $C$. We thus obtain pairs of clean $y$ and corresponding noisy labels $\\hat{y}$ for the same instances and the weights of the noise layer can be initialized as", "Following the naming by BIBREF14, we call this the global noise model." ], [ "The global confusion matrix is a simple model which assumes that the errors in the noisy labels depend on the clean labels. An approach that also takes the corresponding features $x$ into account can model more complex relations. BIBREF15 and BIBREF14 use multiple layers of a neural network to model these relationships. However, in low resource settings with only small amounts of clean, supervised data, these more complex models can be difficult to learn. In contrast to that, larger amounts of unlabeled text are usually available even in low-resource settings. Therefore, we propose to use unsupervised clustering techniques to partition the feature space of the input words (and the corresponding instances) before estimating the noise matrices. To create the clusters, we use either Brown clustering BIBREF19 on the input words or $k$-means clustering BIBREF20 on the pretrained word embeddings after applying PCA BIBREF21.", "In sequence labeling tasks, the features $x$ of an instance usually consist of the input word $\\iota (x)$ and its context. Given a clustering $\\Pi $ over the input words $\\lbrace \\iota (x) \\mid (x,y) \\in C \\cup N \\rbrace $ consisting of clusters $\\Pi _1, ..., \\Pi _p$, we can group all clean and noisy instances into groups", "For each group, we construct an independent confusion matrix using Formulas DISPLAY_FORM3 and DISPLAY_FORM5. 
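Before stating the resulting prediction rule, a small numpy sketch may help to illustrate this estimation step: one global confusion matrix is computed from the (clean, noisy) label pairs on C, and an additional matrix is computed for each word cluster. The tiny label set, cluster assignments and smoothing constant are assumptions made purely for illustration, not the paper's configuration.

```python
# Illustrative sketch (not the authors' code) of estimating global and
# per-cluster confusion matrices from clean/noisy label pairs on C.
import numpy as np

def confusion_matrix(y_clean, y_noisy, k, smooth=1e-6):
    """Row-normalized estimate of p(noisy = j | clean = i)."""
    counts = np.zeros((k, k)) + smooth
    for i, j in zip(y_clean, y_noisy):
        counts[i, j] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

k = 3                                          # e.g. O, PER, LOC in IO format (assumed)
y_clean = np.array([0, 0, 1, 1, 2, 2, 0, 1])   # gold labels on the clean set C
y_noisy = np.array([0, 0, 1, 0, 2, 0, 0, 1])   # distant-supervision labels for the same tokens
clusters = np.array([0, 1, 0, 0, 1, 1, 1, 0])  # cluster id of each token's input word

global_cm = confusion_matrix(y_clean, y_noisy, k)
cluster_cms = {c: confusion_matrix(y_clean[clusters == c], y_noisy[clusters == c], k)
               for c in np.unique(clusters)}
print(global_cm.round(2))
print(cluster_cms[0].round(2))
```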
The prediction of the noisy label $\\hat{y}$ (Formula DISPLAY_FORM2) then becomes", "Since the clustering is performed on unsupervised data, in low-resource settings, the size of an actual group of instances $G_q$ can be very small. If the number of members in a group is insufficient, the estimation of reliable noise matrices is difficult. This issue can be avoided by only using the largest groups and creating a separate group for all other instances. To make use of all the clusters, we alternatively propose to interpolate between the global and the group confusion matrix:", "The interpolation hyperparameter $\\lambda $ (with $0 \\le \\lambda \\le 1$) regulates the influence from the global matrix on the interpolated matrix. The selection of the largest groups and the interpolation can also be combined." ], [ "We evaluate all models in five low-resource NER settings across different languages. Although the evaluation is performed for NER labeling, the proposed models are not restricted to the task of NER and can potentially be used for other tasks." ], [ "We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training.", "The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\\lambda \\in \\lbrace 0.3, 0.5, 0.7\\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set.", "We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss.", "The training for all models was performed with labels in the IO format. The predicted labels for the test data were converted and evaluated in IOB2 with the official CoNLL evaluation script. The IOB2 format would increase matrix size making the confusion matrix estimation more difficult without adding much information in practice. In preliminary experiments, this decreased performance in particular for low-resource settings." ], [ "The models were tested on the four CoNLL datasets for English, German, Spanish and Dutch BIBREF23, BIBREF24 using the standard split, and the Estonian data from BIBREF25 using a 10/10/80 split for dev/test/train sets. For each language, the labels of 1% of the training data (ca. 2100 instances) were used to obtain a low-resource setting. We treat this as the clean data $C$. 
The rest of the (now unlabeled) training data was used for the automatic annotation which we treat as noisily labeled data $N$. We applied the distant supervision method by BIBREF1, which uses lists and gazetteer information for NER labeling. As seen in Table TABREF19, this method reaches rather high precision but has a poor recall. The development set of the original dataset is used for model-epoch and hyperparameter selection, and the results are reported on the complete, clean test set. The words were embedded with the pretrained fastText vectors BIBREF26. The clusters were calculated on the unlabeled version of the full training data. Additionally, the Brown clusters used the language-specific documents from the Europarl corpus BIBREF27." ], [ "The results of all models are shown in Table TABREF9. The newly proposed cluster-based models achieve the best performance across all languages and outperform all other models in particular for Dutch and English. The combination of interpolation with the global matrix and the selection of large clusters is almost always beneficial compared to the cluster-based models using only one of the methods. In general, both clustering methods achieve similar performance in combination with interpolation and selection, except for English, where Brown clustering performs worse than $k$-Means clustering. While the Brown clustering was trained on the relatively small Europarl corpus, $k$-Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl." ], [ "In the majority of cases, a cluster size of 10 or 25 was selected on the development set during the hyperparameter search. Increasing the number of clusters introduces smaller clusters for which it is difficult to estimate the noise matrix, due to the limited training resources. On the other hand, decreasing the number of clusters can generalize too much, resulting in loss of information on the noise distribution. For the $\\lambda $ parameter, a value of either 0.3 or 0.5 was chosen on the development set giving the group clusters more or equal weight compared to the global confusion matrix. This shows that the feature dependent noise matrices are important and have a positive impact on performance.", "Five confusion matrices for groups and the global matrix in the English data are shown as examples in Figure FIGREF10. One can see that the noise matrix can visibly differ depending on the cluster of the input word. Some of these differences can also be directly explained by flaws in the distant supervision method. The automatic annotation did not label any locations written in all upper-case letters as locations. Therefore, the noise distribution for all upper-cased locations differs from the distribution of other location names (cf. FIGREF10 and FIGREF10). The words April and June are used both as names for a month and as first names in English. This results in a very specific noise distribution with many temporal expressions being annotated as person entities (cf. FIGREF10). Similar to this, first-person names and also Asian words are likely to be labeled as persons by the automatic annotation method (cf. FIGREF10 and FIGREF10).", "All of these groups show traits that are not displayed in the global matrix, allowing the cluster-based models to outperform the other systems." ], [ "We have shown that the noise models with feature-dependent confusion matrices can be used effectively in practice. 
These models improve low-resource named entity recognition with noisy labels beyond all other tested baselines. Further, the feature-dependent confusion matrices are task-independent and could be used for other NLP tasks, which is one possible direction of future research." ], [ "The authors would like to thank Heike Adel, Annemarie Friedrich and the anonymous reviewers for their helpful comments. This work has been partially funded by Deutsche Forschungsgemeinschaft (DFG) under grant SFB 1102: Information Density and Linguistic Encoding." ] ], "section_name": [ "Introduction", "Related Work", "Global Noise Model", "Feature Dependent Noise Model", "Experiments", "Experiments ::: Models", "Experiments ::: Data", "Experimental Results", "Analysis", "Conclusions", "Acknowledgments" ] }
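As a follow-up to the model descriptions above, the sketch below shows, with toy numbers, how a cluster matrix can be interpolated with the global matrix and how the base model's clean-label distribution is pushed through the resulting noise layer to obtain the noisy-label distribution. It is a schematic reconstruction under assumed values, not the released implementation.

```python
# Toy sketch of the interpolated noise layer:
#   C_interp = lambda * C_global + (1 - lambda) * C_cluster
#   p(noisy | x) = p(clean | x) @ C_interp
import numpy as np

lam = 0.5                                      # interpolation weight (assumed value)
global_cm = np.array([[0.9, 0.05, 0.05],
                      [0.3, 0.65, 0.05],
                      [0.4, 0.05, 0.55]])
cluster_cm = np.array([[0.8, 0.10, 0.10],
                       [0.6, 0.35, 0.05],
                       [0.1, 0.05, 0.85]])

interpolated = lam * global_cm + (1.0 - lam) * cluster_cm

p_clean = np.array([0.2, 0.7, 0.1])            # softmax output of the base model for one token
p_noisy = p_clean @ interpolated               # distribution over noisy labels
print(interpolated.round(3))
print(p_noisy.round(3), p_noisy.sum())
```

Because each row of the interpolated matrix still sums to one, the transformed output remains a proper distribution over the noisy labels.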
{ "answers": [ { "annotation_id": [ "53a7f2513d6f0a4bb020aa66ae6c6d0d7e06154c" ], "answer": [ { "evidence": [ "We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training.", "The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\\lambda \\in \\lbrace 0.3, 0.5, 0.7\\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set.", "We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss." ], "extractive_spans": [ "Base ", "Base+Noise", "Cleaning ", "Dynamic-CM ", " Global-CM", " Global-ID-CM", "Brown-CM ", " K-Means-CM" ], "free_form_answer": "", "highlighted_evidence": [ " The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training.\n\nThe cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\\lambda \\in \\lbrace 0.3, 0.5, 0.7\\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set.\n\nWe implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "d2f7f48850f258e564972200001d9761b1934d8a" ], "answer": [ { "evidence": [ "Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "e7efc035fced7d0e572eeffb8e82825821525c26" ], "answer": [ { "evidence": [ "Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines." ], "extractive_spans": [], "free_form_answer": "They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise", "highlighted_evidence": [ "We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "two", "two", "two" ], "paper_read": [ "no", "no", "no" ], "question": [ "What is baseline used?", "Did they evaluate against baseline?", "How they evaluate their approach?" ], "question_id": [ "439af1232a012fc4d94ef2ffe305dd405bee3888", "b6a6bdca6dee70f8fe6dd1cfe3bb2c5ff03b1605", "8951fde01b1643fcb4b91e51f84e074ce3b69743" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Visualization of the noisy labels, confusion matrix architecture. The dotted line shows the proposed new dependency.", "Figure 2: Confusion matrices used for initialization when training with the English dataset. The global matrix is given as well as five of the feature-dependent matrices obtained when using k-Means clustering for 75 clusters.", "Table 1: Results of the evaluation in low-resource settings with 1% of the original labeled training data averaged over six runs. We report the F1 scores (higher is better) on the complete test set, as well as the standard error.", "Table 2: Results of the automatic labeling method proposed by Dembowski et al. (2017) on the test data." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png" ] }
[ "How they evaluate their approach?" ]
[ [ "1910.06061-Introduction-2" ] ]
[ "They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise" ]
581
2002.10361
Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition
Existing research on fairness evaluation of document classification models mainly uses synthetic monolingual data without ground truth for author demographic attributes. In this work, we assemble and publish a multilingual Twitter corpus for the task of hate speech detection with four inferred author demographic factors: age, country, gender and race/ethnicity. The corpus covers five languages: English, Italian, Polish, Portuguese and Spanish. We evaluate the inferred demographic labels with a crowdsourcing platform, Figure Eight. To examine factors that can cause biases, we conduct an empirical analysis of demographic predictability on the English corpus. We measure the performance of four popular document classifiers and evaluate the fairness and bias of the baseline classifiers with respect to the author-level demographic attributes.
{ "paragraphs": [ [ "While document classification models should be objective and independent from human biases in documents, research have shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware document classifiers is to train and build non-discriminatory models towards people no matter what their demographic attributes are, such as gender and ethnicity. Existing research BIBREF0, BIBREF3, BIBREF4, BIBREF5, BIBREF1 in evaluating fairness of document classifiers focus on the group fairness BIBREF6, which refers to every demographic group has equal probability of being assigned to the positive predicted document category.", "However, the lack of original author demographic attributes and multilingual corpora bring challenges towards the fairness evaluation of document classifiers. First, the datasets commonly used to build and evaluate the fairness of document classifiers obtain derived synthetic author demographic attributes instead of the original author information. The common data sources either derive from Wikipedia toxic comments BIBREF0, BIBREF4, BIBREF5 or synthetic document templates BIBREF3, BIBREF4. The Wikipedia Talk corpus BIBREF7 provides demographic information of annotators instead of the authors, Equity Evaluation Corpus BIBREF3 are created by sentence templates and combinations of racial names and gender coreferences. While existing work BIBREF8, BIBREF9 infers user demographic information (white/black, young/old) from the text, such inference is still likely to cause confounding errors that impact and break the independence between demographic factors and the fairness evaluation of text classifiers. Second, existing research in the fairness evaluation mainly focus on only English resources, such as age biases in blog posts BIBREF9, gender biases in Wikipedia comments BIBREF0 and racial biases in hate speech detection BIBREF8. Different languages have shown different patterns of linguistic variations across the demographic attributes BIBREF10, BIBREF11, methods BIBREF12, BIBREF4 to reduce and evaluate the demographic bias in English corpora may not apply to other languages. For example, Spanish has gender-dependent nouns, but this does not exist in English BIBREF2; and Portuguese varies across Brazil and Portugal in both word usage and grammar BIBREF13. The rich variations have not been explored under the fairness evaluation due to lack of multilingual corpora. Additionally, while we have hate speech detection datasets in multiple languages BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, there is still no integrated multilingual corpora that contain author demographic attributes which can be used to measure group fairness. The lack of author demographic attributes and multilingual datasets limits research for evaluating classifier fairness and developing unbiased classifiers.", "In this study, we combine previously published corpora labeled for Twitter hate speech recognition in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17, and publish this multilingual data augmented with author-level demographic information for four attributes: race, gender, age and country. The demographic factors are inferred from user profiles, which are independent from text documents, the tweets. 
To our best knowledge, this is the first multilingual hate speech corpus annotated with author attributes aiming for fairness evaluation. We start with presenting collection and inference steps of the datasets. Next, we take an exploratory study on the language variations across demographic groups on the English dataset. We then experiment with four multiple classification models to establish baseline levels of this corpus. Finally, we evaluate the fairness performance of those document classifiers." ], [ "We assemble the annotated datasets for hate speech classification. To narrow down the data sources, we limit our dataset sources to the unique online social media site, Twitter. We have requested 16 published Twitter hate speech datasets, and finally obtained 7 of them in five languages. By using the Twitter streaming API, we collected the tweets annotated by hate speech labels and their corresponding user profiles in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17. We binarize all tweets' labels (indicating whether a tweet has indications of hate speech), allowing to merge the different label sets and reduce the data sparsity.", "Whether a tweet is considered hate speech heavily depends on who the speaker is; for example, whether a racial slur is intended as hate speech depends in part on the speaker's race BIBREF14. Therefore, hate speech classifiers may not generalize well across all groups of people, and disparities in the detection offensive speech could lead to bias in content moderation BIBREF21. Our contribution is to further annotate the data with user demographic attributes inferred from their public profiles, thus creating a corpus suitable for evaluating author-level fairness for this hate speech recognition task across multiple languages." ], [ "We consider four user factors of age, race, gender and geographic location. For location, we inference two granularities, country and US region, but only experiment with the country attribute. While the demographic attributes can be inferred through tweets BIBREF22, BIBREF8, we intentionally exclude the contents from the tweets if they infer these user attributes, in order to make the evaluation of fairness more reliable and independent. If users were grouped based on attributes inferred from their text, then any differences in text classification across those groups could be related to the same text. Instead, we infer attributes from public user profile information (i.e., description, name and photo)." ], [ "We infer these attributes from each user's profile image by using Face++ (https://www.faceplusplus.com/), a computer vision API that provides estimates of demographic characteristics. Empirical comparisons of facial recognition APIs have found that Face++ is the most accurate tool on Twitter data BIBREF23 and works comparatively better for darker skins BIBREF24. For the gender, we choose the binary categories (male/female) by the predicted probabilities. We map the racial outputs into four categories: Asian, Black, Latino and White. We only keep users that appear to be at least 13 years old, and we save the first result from the API if multiple faces are identified. We experiment and evaluate with binarization of race and age with roughly balanced distributions (white and nonwhite, $\\le $ median vs. elder age) to consider a simplified setting across different languages, since race is harder to infer accurately." 
], [ "The country-level language variations can bring challenges that are worth to explore. We extract geolocation information from users whose profiles contained either numerical location coordinates or a well-formatted (matching a regular expression) location name. We fed the extracted values to the Google Maps API (https://maps.googleapis.com) to obtain structured location information (city, state, country). We first count the main country source and then binarize the country to indicate if a user is in the main country or not. For example, the majority of users in the English are from the United States (US), therefore, we can binarize the country attributes to indicate if the users are in the US or not." ], [ "We show the corpus statistics in Table TABREF8 and summarize the full demographic distributions in Table TABREF9. The binary demographic attributes (age, country, gender, race) can bring several benefits. First, we can create comparatively balanced label distributions. We can observe that there are differences in the race and gender among Italian and Polish data, while other attributes across the other languages show comparably balanced demographic distributions. Second, we can reduce errors inferred from the Face++ on coarse labels. Third, it is more convenient for us to analyze, conduct experiments and evaluate the group fairness of document classifiers.", "Table TABREF8 presents different patterns of the corpus. The Polish data has the smallest users. This is because the data focuses on the people who own the most popular accounts in the Polish data BIBREF16, the other data collected tweets randomly. And the dataset shows a much more sparse distribution of the hate speech label than the other languages.", "Table TABREF9 presents different patterns of the user attributes. English, Portuguese and Spanish users are younger than the Italian and Polish users in the collected data. And both Italian and Polish show more skewed demographic distributions in country, gender and race, while the other datasets show more balanced distributions." ], [ "Image-based approaches will have inaccuracies, as a person's demographic attributes cannot be conclusively determined merely from their appearance. However, given the difficulty in obtaining ground truth values, we argue that automatically inferred attributes can still be informative for studying classifier fairness. If a classifier performs significantly differently across different groups of users, then this shows that the classifier is biased along certain groupings, even if those groupings are not perfectly aligned with the actual attributes they are named after. This subsection tries to quantify how reliably these groupings correspond to the demographic variables.", "Prior research found that Face++ achieves 93.0% and 92.0% accuracy on gender and ethnicity evaluations BIBREF23. We further conduct a small evaluation on the hate speech corpus by a small sample of annotated user profile photos providing a rough estimate of accuracy while acknowledging that our annotations are not ground truth. We obtained the annotations from the crowdsourcing website, Figure Eight (https://figure-eight.com/). We randomly sampled 50 users whose attributes came from Face++ in each language. We anonymize the user profiles and feed the information to the crowdsourcing website. Three annotators annotated each user photo with the binary demographic categories. 
To select qualified annotators and ensure quality of the evaluations, we set up 5 golden standard annotation questions for each language. The annotators can join the evaluation task only by passing the golden standard questions. We decide demographic attributes by majority votes and present evaluation results in Table TABREF11. Our final evaluations show that overall the Face++ achieves averaged accuracy scores of 82.8%, 88.4% and 94.4% for age, race and gender respectively." ], [ "To facilitate the study of classification fairness, we will publicly distribute this anonymized corpus with the inferred demographic attributes including both original and binarized versions. To preserve user privacy, we will not publicize the personal profile information, including user ids, photos, geocoordinates as well as other user profile information, which were used to infer the demographic attributes. We will, however, provide inferred demographic attributes in their original formats from the Face++ and Google Maps based on per request to allow wider researchers and communities to replicate the methodology and probe more depth of fairness in document classification.", "" ], [ "Demographic factors can improve the performances of document classifiers BIBREF25, and demographic variations root in language, especially in social media data BIBREF26, BIBREF25. For example, language styles are highly correlated with authors' demographic attributes, such as age, race, gender and location BIBREF27, BIBREF28. Research BIBREF29, BIBREF12, BIBREF30 find that biases and stereotypes exist in word embeddings, which is widely used in document classification tasks. For example, “receptionist” is closer to females while “programmer” is closer to males, and “professor” is closer to Asian Americans while “housekeeper” is closer to Hispanic Americans.", "This motivates us to explore and test if the language variations hold in our particular dataset, how strong the effects are. We conduct the empirical analysis of demographic predictability on the English dataset." ], [ "We examine how accurately the documents can predict author demographic attributes from three different levels:", "Word-level. We extract TF-IDF-weighted 1-, 2-grams features.", "POS-level. We use Tweebo parser BIBREF31 to tag and extract POS features. We count the POS tag and then normalize the counts for each document.", "Topic-level. We train a Latent Dirichlet Allocation BIBREF32 model with 20 topics using Gensim BIBREF33 with default parameters. Then a document can be represented as a probabilistic distribution over the 20 topics.", "We shuffle and split data into training (70%) and test (30%) sets. Three logistic classifiers are trained by the three levels of features separately. We measure the prediction accuracy and show the absolute improvements in Figure FIGREF18.", "The improved prediction accuracy scores over majority baselines suggest that language variations across demographic groups are encoded in the text documents. The results show that documents are the most predictable to the age attribute. We can also observe that the word is the most predictable feature to demographic factors, while the POS feature is least predictable towards the country factor. These suggest there might be a connection between language variations and demographic groups. This motivates us to further explore the language variations based on word features. 
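For readers who want to reproduce the word-level part of this probe, a minimal scikit-learn sketch is given below; the toy tweets, the binary age labels and the absence of any tuning are assumptions made purely for illustration, not data from the corpus.

```python
# Hypothetical sketch of the word-level demographic predictability probe:
# TF-IDF 1-2 gram features -> logistic regression, compared to the majority baseline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

tweets = ["this game is so cool lol", "the committee will convene on monday",
          "omg best day ever haha", "please review the attached report",
          "cant wait for the weekend!!", "kindly note the updated schedule",
          "lmaooo that was wild", "we appreciate your prompt response"]
age = np.array([0, 1, 0, 1, 0, 1, 0, 1])       # 0 = younger than the median, 1 = older (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(tweets, age, test_size=0.3,
                                          random_state=0, stratify=age)

vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)

acc = accuracy_score(y_te, clf.predict(vec.transform(X_te)))
majority = max(np.mean(y_tr), 1 - np.mean(y_tr))   # majority-class baseline
print(f"word-level probe accuracy: {acc:.2f} (majority baseline: {majority:.2f})")
```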
We rank the word features by mutual information classification BIBREF34 and present the top 10 unigram features in Table TABREF14. The qualitative results show the most predictable word features towards the demographic groups and suggest such variations may impact extracted feature representations and further training fair document classifiers.", "The Table TABREF14 shows that when classifying hate speech tweets, the n-words and b-words are more significant correlated with the white instead of the other racial groups. However, this shows an opposite view than the existing work BIBREF8, which presents the two types of words are more significantly correlated with the black. This can highlight the values of our approach that to avoid confounding errors, we obtain author demographic information independently from the user generated documents." ], [ "Demographic variations root in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present the baseline results of both performance and fairness evaluations across the multilingual corpus." ], [ "To anonymize user information, we hash user and tweet ids and then replace hyperlinks, usernames, and hashtags with generic symbols (URL, USER, HASHTAG). Documents are lowercased and tokenized using NLTK BIBREF38. The corpus is randomly split into training (70%), development (15%), and test (15%) sets. We train the models on the training set and find the optimal hyperparameters on the development set before final evaluations on the test set. We randomly shuffle the training data at the beginning of each training epoch." ], [ "We implement and experiment four baseline classification models. To compare fairly, we keep the feature size up to 15K for each classifier across all five languages. We calculate the weight for each document category by $\\frac{N}{N_l}$ BIBREF39, where $N$ is the number of documents in each language and $N_l$ is the number of documents labeled by the category. Particularly, for training BERT model, we append two additional tokens, “[CLS]” and “[SEP]”, at the start and end of each document respectively. For the neural models, we pad each document or drop rest of words up to 40 tokens. We use “unknown” as a replacement for unknown tokens. We initialize CNN and RNN classifiers by pre-trained word embeddings BIBREF40, BIBREF41, BIBREF42, BIBREF43 and train the networks up to 10 epochs." ], [ "We first extract TF-IDF-weighted features of uni-, bi-, and tri-grams on the corpora, using the most frequent 15K features with the minimum feature frequency as 2. We then train a LogisticRegression from scikit-learn BIBREF34. We use “liblinear” as the solver function and leave the other parameters as default." ], [ "We implement the Convolutional Neural Network (CNN) classifier described in BIBREF36, BIBREF44 by Keras BIBREF45. We first apply 100 filters with three different kernel sizes, 3, 4 and 5. After the convolution operations, we feed the concatenated features to a fully connected layer and output document representations with 100 dimensions. We apply “softplus” function with a l2 regularization with $.03$ and a dropout rate with $.3$ in the dense layer. The model feeds the document representation to final prediction. 
We train the model with batch size 64, set model optimizer as Adam BIBREF46 and calculate loss values by the cross entropy function. We keep all other parameter settings as described in the paper BIBREF36." ], [ "We build a recurrent neural network (RNN) classifier by using bi-directional Gated Recurrent Unit (bi-GRU) BIBREF35, BIBREF4. We set the output dimension of GRU as 200 and apply a dropout on the output with rate $.2$. We optimize the RNN with RMSprop BIBREF47 and use the same loss function and batch size as the CNN model. We leave the other parameters as default in the Keras BIBREF45." ], [ "BERT is a transformer-based pre-trained language model which was well trained on multi-billion sentences publicly available on the web BIBREF37, which can effectively generate the precise text semantics and useful signals. We implement a BERT-based classification model by HuggingFace's Transformers BIBREF48. The model encodes each document into a fixed size (768) of representation and feed to a linear prediction layer. The model is optimized by AdamW with a warmup and learning rate as $.1$ and $2e^{-5}$ respectively. We leave parameters as their default, conduct fine-tuning steps with 4 epochs and set batch size as 32 BIBREF49. The classification model loads “bert-base-uncased” pre-trained BERT model for English and “bert-base-multilingual-uncased” multilingual BERT model BIBREF50 for the other languages. The multilingual BERT model follows the same method of BERT by using Wikipedia text from the top 104 languages. Due to the label imbalance shown in Table TABREF8, we balance training instances by randomly oversampling the minority during the training process." ], [ "To measure overall performance, we evaluate models by four metrics: accuracy (Acc), weighted F1 score (F1-w), macro F1 score (F1-m) and area under the ROC curve (AUC). The F1 score coherently combines both precision and recall by $2*\\frac{precision*recall}{precision+recall}$. We report F1-m considering that the datasets are imbalanced." ], [ "To evaluate group fairness, we measure the equality differences (ED) of true positive/negative and false positive/negative rates for each demographic factor. ED is a standard metric to evaluate fairness and bias of document classifiers BIBREF0, BIBREF4, BIBREF5.", "This metric sums the differences between the rates within specific user groups and the overall rates. Taking the false positive rate (FPR) as an example, we calculate the equality difference by:", ", where $D$ is a demographic factor (e.g., race) and $d$ is a demographic group (e.g., white or nonwhite)." ], [ "We have presented our evaluation results of performance and fairness in Table TABREF20 and Table TABREF29 respectively. Country and race have very skewed distributions in the Italian and Polish corpora, therefore, we omit fairness evaluation on the two factors." ], [ "Table TABREF20 demonstrates the performances of the baseline classifiers for hate speech classification on the corpus we proposed. Results are obtained from the five languages covered in our corpus respectively. Among the four baseline classifiers, LR, CNN and RNN consistently perform well on all languages. Moreover, neural-based models (CNN and RNN) substantially outperform LR on four out of five languages (except Spanish). However, the results obtained by BERT are relatively lower than the other baselines, and show more significant gap in the English dataset. 
One possible explanation is BERT was pre-trained on Wikipedia documents, which are significantly different from the Twitter corpus in document length, word usage and grammars. For example, each tweet is a short document with 20 tokens, but the BERT is trained on long documents up to 512 tokens. Existing research suggests that fine-tuning on the multilingual corpus can further improve performance of BERT models BIBREF49." ], [ "We have measured the group fairness in Table TABREF29. Generally, the RNN classifier achieves better and more stable performance across major fairness evaluation tasks. By comparing the different baseline classifiers, we can find out that the LR usually show stronger biases than the neural classification models among majority of the tasks. While the BERT classifier performs comparatively lower accuracy and F1 scores, the classifier has less biases on the most of the datasets. However, biases can significantly increases for the Portuguese dataset when the BERT classifier achieves better performance. We examine the relationship by building linear model between two differences: the performance differences between the RNN and other classifiers, the SUM-ED differences between RNN and other classifiers. We find that the classification performance does not have significantly ($p-value > .05$) correlation with fairness and bias. The significant biases of classifiers varies across tasks and languages: the classifiers trained on Polish and Italian are biased the most by Age and Gender, the classifiers trained on Spanish and Portuguese are most biased the most by Country, and the classifiers trained on English tweets are the most unbiased throughout all the attributes. Classifiers usually have very high bias scores on both gender and age in Italian and Polish data. We find that the age and gender both have very skewed distributions in the Italian and Polish datasets. Overall, our baselines provide a promising start for evaluating future new methods of reducing demographic biases for document classification under the multilingual setting." ], [ "In this paper, we propose a new multilingual dataset covering four author demographic annotations (age, gender, race and country) for the hate speech detection task. We show the experimental results of several popular classification models in both overall and fairness performance evaluations. Our empirical exploration indicates that language variations across demographic groups can lead to biased classifiers. This dataset can be used for measuring fairness of document classifiers along author-level attributes and exploring bias factors across multilingual settings and multiple user factors. The proposed framework for inferring the author demographic attributes can be used to generate more large-scale datasets or even applied to other social media sites (e.g., Amazon and Yelp). While we encode the demographic attributes into categories in this work, we will provide inferred probabilities of the demographic attributes from Face++ to allow for broader research exploration. Our code, anonymized data and data statement BIBREF51 will be publicly available at https://github.com/xiaoleihuang/Multilingual_Fairness_LREC." 
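For reference, the equality-difference metric used in the fairness evaluation above can be computed in a few lines. The sketch below sums, over the groups of one demographic factor, the absolute gap between each group's rate and the overall rate; taking absolute differences is an assumption, since the text only states that the metric sums the differences. Binary labels, predictions and group memberships are assumed to be NumPy arrays, and all names are illustrative:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR = FP / (FP + TN), computed over the documents whose true label is 0
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean() if negatives.any() else 0.0

def equality_difference(y_true, y_pred, groups, rate_fn=false_positive_rate):
    """Sum over groups d of |rate within d - overall rate| for one factor D."""
    overall = rate_fn(y_true, y_pred)
    return sum(abs(rate_fn(y_true[groups == d], y_pred[groups == d]) - overall)
               for d in np.unique(groups))

# FPED for the race factor; FNED is analogous with a false-negative-rate function
# fped = equality_difference(y_true, y_pred, race_groups)
```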
], [ "While our dataset provides new information on author demographic attributes, and our analysis suggest directions toward reducing bias, a number of limitations must be acknowledged in order to appropriately interpret our findings.", "First, inferring user demographic attributes by profile information can be risky due to the accuracy of the inference toolkit. In this work, we present multiple strategies to reduce the errors bringing by the inference toolkits, such as human evaluation, manually screening and using external public profile information (Instagram). However, we cannot guarantee perfect accuracy of the demographic attributes, and, errors in the attributes may themselves be “unfair” or unevenly distributed due to bias in the inference tools BIBREF24. Still, obtaining individual-level attributes is an important step toward understanding classifier fairness, and our results found biases across these groupings of users, even if some of the groupings contained errors.", "Second, because methods for inferring demographic attributes are not accurate enough to provide fine-grained information, our attribute categories are still too coarse-grained (binary age groups and gender, and only four race categories). Using coarse-grained attributes would hide the identities of specific demographic groups, including other racial minorities and people with non-binary gender. Broadening our analyses and evaluations to include more attribute values may require better methods of user attribute inference or different sources of data.", "Third, language variations across demographic groups might introduce annotation biases. Existing research BIBREF52 shows that annotators are more likely to annotate tweets containing African American English words as hate speech. Additionally, the nationality and educational level might also impact on the quality of annotations BIBREF20. Similarly, different annotation sources of our dataset (which merged two different corpora) might have variations in annotating schema. To reduce annotation biases due to the different annotating schema, we merge the annotations into the two most compatible document categories: normal and hate speech. Annotation biases might still exist, therefore, we will release our original anonymized multilingual dataset for research communities." ], [ "The authors thank the anonymous reviews for their insightful comments and suggestions. This work was supported in part by the National Science Foundation under award number IIS-1657338. This work was also supported in part by a research gift from Adobe." 
] ], "section_name": [ "Introduction", "Data", "Data ::: User Attribute Inference", "Data ::: User Attribute Inference ::: Age, Race, Gender.", "Data ::: User Attribute Inference ::: Country.", "Data ::: Corpus Summary", "Data ::: Demographic Inference Accuracy", "Data ::: Privacy Considerations", "Language Variations across Demographic Groups", "Language Variations across Demographic Groups ::: Are Demographic Factors Predictable in Documents?", "Experiments", "Experiments ::: Data Preprocessing", "Experiments ::: Baseline Models", "Experiments ::: Baseline Models ::: LR.", "Experiments ::: Baseline Models ::: CNN.", "Experiments ::: Baseline Models ::: RNN.", "Experiments ::: Baseline Models ::: BERT", "Experiments ::: Evaluation Metrics ::: Performance Evaluation.", "Experiments ::: Evaluation Metrics ::: Fairness Evaluation.", "Results", "Results ::: Overall performance evaluation.", "Results ::: Group fairness evaluation.", "Conclusion", "Conclusion ::: Limitations", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "dff3eab59a33a4e6cab7e948decc6b4f2b5a6569" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "extractive_spans": [], "free_form_answer": "It contains 106,350 documents", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "ef321095a397257d78ceb66214a5d73f4cfe9fa1" ], "answer": [ { "evidence": [ "Demographic variations root in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present the baseline results of both performance and fairness evaluations across the multilingual corpus." ], "extractive_spans": [ "logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37" ], "free_form_answer": "", "highlighted_evidence": [ "In this study, we experiment four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "53ed1a08288b9501bc85357f9c66c9a22320b28d" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "extractive_spans": [], "free_form_answer": "over 104k documents", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "two", "", "" ], "paper_read": [ "no", "no", "no" ], "question": [ "How large is the corpus?", "Which document classifiers do they experiment with?", "How large is the dataset?" 
], "question_id": [ "38c74ab8292a94fc5a82999400ee9c06be19f791", "ff307b10e56f75de6a32e68e25a69899478a13e4", "16af38f7c4774637cf8e04d4b239d6d72f0b0a3a" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Italian", "", "" ], "topic_background": [ "unfamiliar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech).", "Table 2: Statistical summary of user attributes in age, country, gender and race. For the age, we present both mean and median values in case of outliers. For the other attributes, we show binary distributions.", "Table 3: Annotator agreement (percentage overlap) and evaluation accuracy for Face++.", "Table 4: Top 10 predictable features of race and gender in the English dataset.", "Figure 1: Predictability of demographic attributes from the English data. We show the absolute percentage improvements in accuracy over majority-class baselines. The majority-class baselines of accuracy are .500 for the binary predictions. The darker color indicates higher improvements and vice versa.", "Table 5: Overall performance evaluation of baseline classifiers. We evaluate overall performance by four metrics including accuracy (Acc), weighted F1 score (F1-w), macro F1 score (F1-m) and area under the ROC curve (AUC). The higher score indicates better performance. We highlight models that achieve the best performance in each column.", "Table 6: Fairness evaluation of baseline classifiers across the five languages on the four demographic factors. We measure fairness and bias of document classifiers by equality differences of false negative rate (FNED), false positive rate (FPED) and sum of FNED and FPED (SUM-ED). The higher score indicates lower fairness and higher bias and vice versa." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Figure1-1.png", "5-Table5-1.png", "7-Table6-1.png" ] }
[ "How large is the corpus?", "How large is the dataset?" ]
[ [ "2002.10361-2-Table1-1.png" ], [ "2002.10361-2-Table1-1.png" ] ]
[ "It contains 106,350 documents", "over 104k documents" ]
582
1810.10254
Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling
Building large-scale datasets for training code-switching language models is challenging and very expensive. Using parallel corpora has been a common workaround to alleviate this problem; however, existing solutions rely on linguistic constraints that may not capture the real data distribution. In this work, we propose a novel method for learning how to generate code-switching sentences from parallel corpora. Our model combines a Seq2Seq architecture with pointer networks to align and choose words from the monolingual sentences and form a grammatical code-switching sentence. Our experiments show that training a language model on the augmented sentences improves the perplexity score by 10% compared to the LSTM baseline.
{ "paragraphs": [ [ "Language mixing has been a common phenomenon in multilingual communities. It is motivated in response to social factors as a way of communication in a multicultural society. From a sociolinguistic perspective, individuals do code-switching in order to construct an optimal interaction by accomplishing the conceptual, relational-interpersonal, and discourse-presentational meaning of conversation BIBREF0 . In its practice, the variation of code-switching will vary due to the traditions, beliefs, and normative values in the respective communities. A number of studies BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 found that code-switching is not produced indiscriminately, but follows syntactic constraints. Many linguists formulated various constraints to define a general rule for code-switching BIBREF1 , BIBREF3 , BIBREF4 . However, the constraints are not enough to make a good generalization of real code-switching constraints, and they have not been tested in large-scale corpora for many language pairs.", "One of the biggest problem in code-switching is collecting large scale corpora. Speech data have to be collected from a spontaneous speech by bilingual speakers and the code-switching has to be triggered naturally during the conversation. In order to solve the data scarcity issue, code-switching data generation is useful to increase the volume and variance. A linguistics constraint-driven generation approach such as equivalent constraint BIBREF5 , BIBREF6 is not restrictive to languages with distinctive grammar structure.", "In this paper, we propose a novel language-agnostic method to learn how to generate code-switching sentences by using a pointer-generator network BIBREF7 . The model is trained from concatenated sequences of parallel sentences to generate code-switching sentences, constrained by code-switching texts. The pointer network copies words from both languages and pastes them into the output, generating code switching sentences in matrix language to embedded language and vice versa. The attention mechanism helps the decoder to generate meaningful and grammatical sentences without needing any sequence alignment. This idea is also in line with code-mixing by borrowing words from the embedded language BIBREF8 and intuitively, the copying mechanism can be seen as an end-to-end approach to translate, align, and reorder the given words into a grammatical code-switching sentence. This approach is the unification of all components in the work of BIBREF5 into a single computational model. A code-switching language model learned in this way is able to capture the patterns and constraints of the switches and mitigate the out-of-vocabulary (OOV) issue during sequence generation. By adding the generated sentences and incorporating syntactic information to the training data, we achieve better performance by INLINEFORM0 compared to an LSTM baseline BIBREF9 and INLINEFORM1 to the equivalent constraint." ], [ "The synthetic code-switching generation approach was introduced by adapting equivalence constraint on monolingual sentence pairs during the decoding step on an automatic speech recognition (ASR) model BIBREF5 . BIBREF10 explored Functional Head Constraint, which was found to be more restrictive than the Equivalence Constraint, but complex to be implemented, by using a lattice parser with a weighted finite-state transducer. BIBREF11 extended the RNN by adding POS information to the input layer and factorized output layer with a language identifier. 
Then, Factorized RNN networks were combined with an n-gram backoff model using linear interpolation BIBREF12 . BIBREF13 added syntactic and semantic features to the Factorized RNN networks. BIBREF14 adapted an effective curriculum learning by training a network with monolingual corpora of both languages, and subsequently train on code-switched data. A further investigation of Equivalence Constraint and Curriculum Learning showed an improvement in language modeling BIBREF6 . A multi-task learning approach was introduced to train the syntax representation of languages by constraining the language generator BIBREF9 .", "A copy mechanism was proposed to copy words directly from the input to the output using an attention mechanism BIBREF15 . This mechanism has proven to be effective in several NLP tasks including text summarization BIBREF7 , and dialog systems BIBREF16 . The common characteristic of these tasks is parts of the output are exactly the same as the input source. For example, in dialog systems the responses most of the time have appeared in the previous dialog steps." ], [ "We use a sequence to sequence (Seq2Seq) model in combination with pointer and copy networks BIBREF7 to align and choose words from the monolingual sentences and generate a code-switching sentence. The models' input is the concatenation of the two monolingual sentences, denoted as INLINEFORM0 , and the output is a code-switched sentence, denoted as INLINEFORM1 . The main assumption is that almost all, the token present in the code-switching sentence are also present in the source monolingual sentences. Our model leverages this property by copying input tokens, instead of generating vocabulary words. This approach has two major advantages: (1) the learning complexity decreases since it relies on copying instead of generating; (2) improvement in generalization, the copy mechanism could produce words from the input that are not present in the vocabulary." ], [ "Instead of generating words from a large vocabulary space using a Seq2Seq model with attention BIBREF17 , pointer-generator network BIBREF7 is proposed to copy words from the input to the output using an attention mechanism and generate the output sequence using decoders. The network is depicted in Figure FIGREF1 . For each decoder step, a generation probability INLINEFORM0 INLINEFORM1 [0,1] is calculated, which weights the probability of generating words from the vocabulary, and copying words from the source text. INLINEFORM2 is a soft gating probability to decide whether generating the next token from the decoder or copying the word from the input instead. The attention distribution INLINEFORM3 is a standard attention with general scoring BIBREF17 . It considers all encoder hidden states to derive the context vector. The vocabulary distribution INLINEFORM4 is calculated by concatenating the decoder state INLINEFORM5 and the context vector INLINEFORM6 . DISPLAYFORM0 ", "where INLINEFORM0 are trainable parameters and INLINEFORM1 is the scalar bias. The vocabulary distribution INLINEFORM2 and the attention distribution INLINEFORM3 are weighted and summed to obtain the final distribution INLINEFORM4 . The final distribution is calculated as follows: DISPLAYFORM0 ", "We use a beam search to select INLINEFORM0 -best code-switching sentences and concatenate the generated sentence with the training set to form a larger dataset. The result of the generated code-switching sentences is showed in Table TABREF6 . 
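To make the copy mechanism concrete, the weighted combination of the vocabulary and attention distributions described above can be sketched as follows in PyTorch; the generation gate, attention weights and vocabulary distribution are assumed to come from the encoder-decoder already described, and the tensor names are illustrative:

```python
import torch

def final_distribution(p_gen, p_vocab, attn, src_ids):
    """Mix the generation and copy distributions of a pointer-generator.

    p_gen:   (batch, 1)        soft gate in [0, 1]
    p_vocab: (batch, vocab)    softmax over the output vocabulary
    attn:    (batch, src_len)  attention over the concatenated source tokens
    src_ids: (batch, src_len)  vocabulary ids of the source tokens
    """
    out = p_gen * p_vocab
    # add the copy probability mass of each source token to its vocabulary id
    out = out.scatter_add(1, src_ids, (1.0 - p_gen) * attn)
    return out
```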
As our baseline, we compare our proposed method with three other models: (1) We use Seq2Seq with attention; (2) We generate sequences that satisfy Equivalence Constraint BIBREF5 . The constraint doesn't allow any switch within a crossing of two word alignments. We use FastAlign BIBREF18 as the word aligner; (3) We also form sentences using the alignments without any constraint. The number of the generated sentences are equivalent to 3-best data from the pointer-generator model. To increase the generation variance, we randomly permute each alignment to form a new sequence." ], [ "The quality of the generated code-switching sentences is evaluated using a language modeling task. Indeed, if the perplexity in this task drops consistently we can assume that the generated sentences are well-formed. Hence, we use an LSTM language model with weight tying BIBREF19 that can capture an unbounded number of context words to approximate the probability of the next word. Syntactic information such as Part-of-speech (POS) INLINEFORM0 is added to further improve the performance. The POS tags are generated phrase-wise using pretrained English and Chinese Stanford POS Tagger BIBREF20 by adding a word at a time in a unidirectional way to avoid any intervention from future information. The word and syntax unit are represented as a vector INLINEFORM1 and INLINEFORM2 respectively. Next, we concatenate both vectors and use it as an input INLINEFORM3 to an LSTM layer similar to BIBREF9 ." ], [ "In our experiment, we use a conversational Mandarin-English code-switching speech corpus called SEAME Phase II (South East Asia Mandarin-English). The data are collected from spontaneously spoken interviews and conversations in Singapore and Malaysia by bilinguals BIBREF21 . As the data preprocessing, words are tokenized using Stanford NLP toolkit BIBREF22 and all hesitations and punctuations were removed except apostrophe. The split of the dataset is identical to BIBREF9 and it is showed in Table TABREF6 ." ], [ "In this section, we present the experimental settings for pointer-generator network and language model. Our experiment, our pointer-generator model has 500-dimensional hidden states and word embeddings. We use 50k words as our vocabulary for source and target. We evaluate our pointer-generator performance using BLEU score. We take the best model as our generator and during the decoding stage, we generate 1-best and 3-best using beam search with a beam size of 5. For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences. Then, we concatenate the translated English and Mandarin sequences and assign code-switching sequences as the labels ( INLINEFORM2 ).", "The baseline language model is trained using RNNLM BIBREF23 . Then, we train our 2-layer LSTM models with a hidden size of 500 and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size for weight tying. We optimize our model using SGD with initial learning rates of INLINEFORM1 . If there is no improvement during the evaluation, we reduce the learning rate by a factor of 0.75. In each time step, we apply dropout to both embedding layer and recurrent network. The gradient is clipped to a maximum of 0.25. Perplexity measure is used in the evaluation." ], [ "UTF8gbsn The pointer-generator significantly outperforms the Seq2Seq with attention model by 3.58 BLEU points on the test set as shown in Table TABREF8 . 
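Before turning to the language modeling results, a minimal PyTorch sketch of the weight-tied LSTM language model described in the training setup (2 layers, hidden and embedding size 500 so that the output projection can share the embedding matrix); the POS-aware variant concatenates a syntax embedding to the word embedding before the LSTM, the dropout value here is a placeholder since the text does not state it, and all names are illustrative:

```python
import torch.nn as nn

class TiedLSTMLM(nn.Module):
    def __init__(self, vocab_size, hidden=500, num_layers=2, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.drop = nn.Dropout(dropout)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=num_layers,
                            dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab_size)
        # weight tying: the softmax projection shares the embedding matrix
        self.decoder.weight = self.embed.weight

    def forward(self, tokens, state=None):
        x = self.drop(self.embed(tokens))   # (batch, 35, 500) when unrolled for 35 steps
        out, state = self.lstm(x, state)
        return self.decoder(self.drop(out)), state
```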
Our language modeling result is given in Table TABREF9 . Based on the empirical result, adding generated samples consistently improve the performance of all models with a moderate margin around 10% in perplexity. After all, our proposed method still slightly outperforms the heuristic from linguistic constraint. In addition, we get a crucial gain on performance by adding syntax representation of the sequences.", "Change in data distribution: To further analyze the generated result, we observed the distribution of real code-switching data and the generated code-switching data. From Figure FIGREF15 , we can see that 1-best and real code-switching data have almost identical distributions. The distributions are left-skewed where the overall mean is less than the median. Interestingly, the distribution of the 3-best data is less skewed and generates a new set of n-grams such as “那个(that) proposal\" which was learned from other code-switching sequences. As a result, generating more samples effects the performance positively.", "Importance of Linguistic Constraint: The result in Table TABREF9 emphasizes that linguistic constraints have some significance in replicating the real code-switching patterns, specifically the equivalence constraint. There is a slight reduction in perplexity around 6 points on the test set. In addition, when we ignore the constraint, we lose performance because it still allows switches in the inversion grammar cases.", "Does the pointer-generator learn how to switch? We found that our pointer-generator model generates sentences that have not been seen before. The example in Figure FIGREF1 shows that our model is able to construct a new well-formed sentence such as “我们要去(We want to) check\". It is also shown that the pointer-generator model has the capability to learn the characteristics of the linguistic constraints from data without any word alignment between the matrix and embedded languages. On the other hand, training using 3-best data obtains better performance compared to 1-best data. We found a positive correlation from Table TABREF6 , where 3-best data is more similar to the test set in terms of segment length and number of switches compared to 1-best data. Adding more samples INLINEFORM0 may improve the performance, but it will be saturated at a certain point. One way to solve this is by using more parallel samples." ], [ "We introduce a new learning method for code-switching sentence generation using a parallel monolingual corpus that is applicable to any language pair. Our experimental result shows that adding generated sentences to the training data, effectively improves our model performance. Combining the generated samples with code-switching dataset reduces perplexity. We get further performance gain after using syntactic information of the input. In future work, we plan to explore reinforcement learning for sequence generation and employ more parallel corpora." ] ], "section_name": [ "Introduction", "Related Work", "Methodology", "Pointer-generator Network", "Language Modeling", "Corpus", "Training Setup", "Results", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "6e34b0f40598e906c0faf7e69f3c420b204047f9" ], "answer": [ { "evidence": [ "In this section, we present the experimental settings for pointer-generator network and language model. Our experiment, our pointer-generator model has 500-dimensional hidden states and word embeddings. We use 50k words as our vocabulary for source and target. We evaluate our pointer-generator performance using BLEU score. We take the best model as our generator and during the decoding stage, we generate 1-best and 3-best using beam search with a beam size of 5. For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences. Then, we concatenate the translated English and Mandarin sequences and assign code-switching sequences as the labels ( INLINEFORM2 ).", "The baseline language model is trained using RNNLM BIBREF23 . Then, we train our 2-layer LSTM models with a hidden size of 500 and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size for weight tying. We optimize our model using SGD with initial learning rates of INLINEFORM1 . If there is no improvement during the evaluation, we reduce the learning rate by a factor of 0.75. In each time step, we apply dropout to both embedding layer and recurrent network. The gradient is clipped to a maximum of 0.25. Perplexity measure is used in the evaluation." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our pointer-generator performance using BLEU score.", "The baseline language model is trained using RNNLM BIBREF23 .", "Perplexity measure is used in the evaluation.\n\n" ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "fdefb1edd80ca6c1ede539467bbc0fe8eaf244b3" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3. Language Modeling Results (in perplexity).", "UTF8gbsn The pointer-generator significantly outperforms the Seq2Seq with attention model by 3.58 BLEU points on the test set as shown in Table TABREF8 . Our language modeling result is given in Table TABREF9 . Based on the empirical result, adding generated samples consistently improve the performance of all models with a moderate margin around 10% in perplexity. After all, our proposed method still slightly outperforms the heuristic from linguistic constraint. In addition, we get a crucial gain on performance by adding syntax representation of the sequences." ], "extractive_spans": [], "free_form_answer": "Perplexity score 142.84 on dev and 138.91 on test", "highlighted_evidence": [ "FLOAT SELECTED: Table 3. Language Modeling Results (in perplexity).", "Our language modeling result is given in Table TABREF9 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "55be293ab2ab549a9376ac24aec8631676281a5e" ], "answer": [ { "evidence": [ "In our experiment, we use a conversational Mandarin-English code-switching speech corpus called SEAME Phase II (South East Asia Mandarin-English). The data are collected from spontaneously spoken interviews and conversations in Singapore and Malaysia by bilinguals BIBREF21 . As the data preprocessing, words are tokenized using Stanford NLP toolkit BIBREF22 and all hesitations and punctuations were removed except apostrophe. The split of the dataset is identical to BIBREF9 and it is showed in Table TABREF6 ." 
], "extractive_spans": [ "Mandarin", "English" ], "free_form_answer": "", "highlighted_evidence": [ "In our experiment, we use a conversational Mandarin-English code-switching speech corpus called SEAME Phase II (South East Asia Mandarin-English). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "ef33389674b956565b351d949c112fffa5a06a1e" ], "answer": [ { "evidence": [ "In this section, we present the experimental settings for pointer-generator network and language model. Our experiment, our pointer-generator model has 500-dimensional hidden states and word embeddings. We use 50k words as our vocabulary for source and target. We evaluate our pointer-generator performance using BLEU score. We take the best model as our generator and during the decoding stage, we generate 1-best and 3-best using beam search with a beam size of 5. For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences. Then, we concatenate the translated English and Mandarin sequences and assign code-switching sequences as the labels ( INLINEFORM2 )." ], "extractive_spans": [], "free_form_answer": "Parallel monolingual corpus in English and Mandarin", "highlighted_evidence": [ "For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "Did they use other evaluation metrics?", "What was their perplexity score?", "What languages are explored in this paper?", "What parallel corpus did they use?" ], "question_id": [ "9257c578ee19a7d93e2fba866be7b0bf1142c393", "657edbf39c500b2446edb9cca18de2912c628b7d", "235c156d9c2adc895c9113f53c60f2dd8df45834", "fa2ffc6b4b046e17bc41e199855c4941673e2caf" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Fig. 1. Pointer Generator Networks [8]. The figure shows an example of the input and the 3-best generated sentences.", "Table 3. Language Modeling Results (in perplexity).", "Table 1. Data Statistics of SEAME Phase II and Generated Sequences using Pointer-generator Network [10].", "Table 2. Code-Switching Sentence Generation Results. Higher BLEU and lower perplexity (PPL) is better.", "Fig. 2. Univariate data distribution for unigram (top-left), bigram (top-right), trigram (bottom-left), and fourgram (bottom-right). The shown n-grams are sampled from the 3-best data of the pointer-generator model." ], "file": [ "2-Figure1-1.png", "3-Table3-1.png", "3-Table1-1.png", "3-Table2-1.png", "4-Figure2-1.png" ] }
[ "What was their perplexity score?", "What parallel corpus did they use?" ]
[ [ "1810.10254-Results-0", "1810.10254-3-Table3-1.png" ], [ "1810.10254-Training Setup-0" ] ]
[ "Perplexity score 142.84 on dev and 138.91 on test", "Parallel monolingual corpus in English and Mandarin" ]
585
1703.06492
VQABQ: Visual Question Answering by Basic Questions
Given an image and a question about that image as input, our method outputs the text-based answer to the query question, a task known as Visual Question Answering (VQA). Our algorithm consists of two main modules. Given a natural language question about an image, the first module takes the question as input and outputs the basic questions of the given main question. The second module takes the main question, the image and these basic questions as input and outputs the text-based answer to the main question. We formulate basic question generation as a LASSO optimization problem and propose a criterion for how to exploit these basic questions to help answer the main question. Our method is evaluated on the challenging VQA dataset and yields state-of-the-art accuracy of 60.34% on the open-ended task.
{ "paragraphs": [ [ "Visual Question Answering (VQA) is a challenging and young research field, which can help machines achieve one of the ultimate goals in computer vision, holistic scene understanding BIBREF1 . VQA is a computer vision task: a system is given an arbitrary text-based question about an image, and then it should output the text-based answer of the given question about the image. The given question may contain many sub-problems in computer vision, e.g.,", "Besides, in our real life there are a lot of more complicated questions that can be queried. So, in some sense, VQA can be considered as an important basic research problem in computer vision. From the above sub-problems in computer vision, we can discover that if we want to do holistic scene understanding in one step, it is probably too difficult. So, we try to divide the holistic scene understanding-task into many sub-tasks in computer vision. The task-dividing concept inspires us to do Visual Question Answering by Basic Questions (VQABQ), illustrated by Figure 1 . That means, in VQA, we can divide the query question into some basic questions, and then exploit these basic questions to help us answer the main query question. Since 2014, there has been a lot of progress in designing systems with the VQA ability BIBREF2 , BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Regarding these works, we can consider most of them as visual-attention VQA works because most of them do much effort on dealing with the image part but not the text part. However, recently there are some works BIBREF7 , BIBREF8 that try to do more effort on the question part. In BIBREF8 , authors proposed a Question Representation Update (QRU) mechanism to update the original query question to increase the accuracy of the VQA algorithm. Typically, VQA is a strongly image-question dependent issue, so we should pay equal attention to both the image and question, not only one of them. In reality, when people have an image and a given question about the image, we usually notice the keywords of the question and then try to focus on some parts of the image related to question to give the answer. So, paying equal attention to both parts is a more reasonable way to do VQA. In BIBREF7 , the authors proposed a Co-Attention mechanism, jointly utilizing information about visual and question attention, for VQA and achieved the state-of-the-art accuracy.", "The Co-Attention mechanism inspires us to build part of our VQABQ model, illustrated by Figure 2 . In the VQABQ model, there are two main modules, the basic question generation module (Module 1) and co-attention visual question answering module (Module 2). We take the query question, called the main question (MQ), encoded by Skip-Thought Vectors BIBREF9 , as the input of Module 1. In the Module 1, we encode all of the questions, also by Skip-Thought Vectors, from the training and validation sets of VQA BIBREF0 dataset as a 4800 by 215623 dimension basic question (BQ) matrix, and then solve the LASSO optimization problem, with MQ, to find the 3 BQ of MQ. These BQ are the output of Module 1. Moreover, we take the MQ, BQ and the given image as the input of Module 2, the VQA module with co-attention mechanism, and then it can output the final answer of MQ. We claim that the BQ can help Module 2 get the correct answer to increase the VQA accuracy. In this work, our main contributions are summarized below:", "The rest of this paper is organized as the following. 
We first talk about the motivation about this work in Section 2. In Section 3, we review the related work, and then Section 4 shortly introduces the proposed VQABQ dataset. We discuss the detailed methodology in Section 5. Finally, the experimental results are demonstrated in Section 6." ], [ "The following two important reasons motivate us to do Visual Question Answering by Basic Questions (VQABQ). First, recently most of VQA works only emphasize more on the image part, the visual features, but put less effort on the question part, the text features. However, image and question features both are important for VQA. If we only focus on one of them, we probably cannot get the good performance of VQA in the near future. Therefore, we should put our effort more on both of them at the same time. In BIBREF7 , they proposed a novel co-attention mechanism that jointly performs image-guided question attention and question-guided image attention for VQA. BIBREF7 also proposed a hierarchical architecture to represent the question, and construct image-question co-attention maps at the word level, phrase level and question level. Then, these co-attended features are combined with word level, phrase level and question level recursively for predicting the final answer of the query question based on the input image. BIBREF8 is also a recent work focusing on the text-based question part, text feature. In BIBREF8 , they presented a reasoning network to update the question representation iteratively after the question interacts with image content each time. Both of BIBREF7 , BIBREF8 yield better performance than previous works by doing more effort on the question part.", "Secondly, in our life , when people try to solve a difficult problem, they usually try to divide this problem into some small basic problems which are usually easier than the original problem. So, why don't we apply this dividing concept to the input question of VQA ? If we can divide the input main question into some basic questions, then it will help the current VQA algorithm achieve higher probability to get the correct answer of the main question.", "Thus, our goal in this paper is trying to generate the basic questions of the input question and then exploit these questions with the given image to help the VQA algorithm get the correct answer of the input question. Note that we can consider the generated basic questions as the extra useful information to VQA algorithm." ], [ "Recently, there are many papers BIBREF0 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 have proposed methods to solve the VQA issue. Our method involves in different areas in machine learning, natural language processing (NLP) and computer vision. The following, we discuss recent works related to our approach for solving VQA problem.", "Sequence modeling by Recurrent Neural Networks.", "Recurrent Neural Networks (RNN) can handle the sequences of flexible length. Long Short Term Memory (LSTM) BIBREF17 is a particular variant of RNN and in natural language tasks, such as machine translation BIBREF18 , BIBREF19 , LSTM is a successful application. In BIBREF14 , the authors exploit RNN and Convolutional Neural Network (CNN) to build a question generation algorithm, but the generated question sometimes has invalid grammar. The input in BIBREF3 is the concatenation of each word embedding with the same feature vector of image. BIBREF6 encodes the input question sentence by LSTM and join the image feature to the final output. 
BIBREF13 groups the neighbouring word and image features by doing convolution. In BIBREF20 , the question is encoded by Gated Recurrent Unit (GRU) BIBREF21 similar to LSTM and the authors also introduce a dynamic parameter layer in CNN whose weights are adaptively predicted by the encoded question feature.", "Sentence encoding.", "In order to analyze the relationship among words, phrases and sentences, several works, such as BIBREF22 , BIBREF9 , BIBREF23 , proposed methods about how to map text into vector space. After we have the vector representation of text, we can exploit the vector analysis skill to analyze the relationship among text. BIBREF22 , BIBREF23 try to map words to vector space, and if the words share common contexts in the corpus, their encoded vectors will close to each other in the vector space. In BIBREF9 , the authors propose a framework of encoder-decoder models, called skip-thoughts. In this model, the authors exploit an RNN encoder with GRU activations BIBREF21 and an RNN decoder with a conditional GRU BIBREF21 . Because skip-thoughts model emphasizes more on whole sentence encoding, in our work, we encode the whole question sentences into vector space by skip-thoughts model and use these skip-thought vectors to do further analysis of question sentences.", "Image captioning.", "In some sense, VQA is related to image captioning BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . BIBREF27 uses a language model to combine a set of possible words detected in several regions of the image and generate image description. In BIBREF26 , the authors use CNN to extract the high-level image features and considered them as the first input of the recurrent network to generate the caption of image. BIBREF24 proposes an algorithm to generate one word at a time by paying attention to local image regions related to the currently predicted word. In BIBREF25 , the deep neural network can learn to embed language and visual information into a common multi-modal space. However, the current image captioning algorithms only can generate the rough description of image and there is no so called proper metric to evaluate the quality of image caption , even though BLEU BIBREF28 can be used to evaluate the image caption.", "Attention-based VQA.", "There are several VQA models have ability to focus on specific image regions related to the input question by integrating the image attention mechanism BIBREF10 , BIBREF11 , BIBREF29 , BIBREF8 . In BIBREF8 , in the pooling step, the authors exploit an image attention mechanism to help determine the relevance between original questions and updated ones. Before BIBREF7 , no work applied language attention mechanism to VQA, but the researchers in NLP they had modeled language attention. In BIBREF7 , the authors propose a co-attention mechanism that jointly performs language attention and image attention. Because both question and image information are important in VQA, in our work we introduce co-attention mechanism into our VQABQ model." ], [ "We propose a new dataset, called Basic Question Dataset (BQD), generated by our basic question generation algorithm. BQD is the first basic question dataset. Regarding the BQD, the dataset format is $\\lbrace Image,~MQ,~3~(BQ + corresponding~similarity~score)\\rbrace $ . 
All of our images are from the testing images of MS COCO dataset BIBREF30 , the MQ, main questions, are from the testing questions of VQA, open-ended, dataset BIBREF0 , the BQ, basic questions, are from the training and validation questions of VQA, open-ended, dataset BIBREF0 , and the corresponding similarity score of BQ is generated by our basic question generation method, referring to Section 5. Moreover, we also take the multiple-choice questions in VQA dataset BIBREF0 to do the same thing as above. Note that we remove the repeated questions in the VQA dataset, so the total number of questions is slightly less than VQA dataset BIBREF0 . In BQD, we have 81434 images, 244302 MQ and 732906 (BQ + corresponding similarity score). At the same time, we also exploit BQD to do VQA and achieve the competitive accuracy compared to state-of-the-art." ], [ "In Section 5, we mainly discuss how to encode questions and generate BQ and why we exploit the Co-Attention Mechanism VQA algorithm BIBREF7 to answer the query question. The overall architecture of our VQABQ model can be referred to Figure 2 . The model has two main parts, Module 1 and Module 2. Regarding Module 1, it takes the encoded MQ as input and uses the matrix of the encoded BQ to output the BQ of query question. Then, the Module 2 is a VQA algorithm with the Co-Attention Mechanism BIBREF7 , and it takes the output of Module 1, MQ, and the given image as input and then outputs the final answer of MQ. The detailed architecture of Module 1 can be referred to Figure 2 ." ], [ "There are many popular text encoders, such as Word2Vec BIBREF23 , GloVe BIBREF22 and Skip-Thoughts BIBREF9 . In these encoders, Skip-Thoughts not only can focus on the word-to-word meaning but also the whole sentence semantic meaning. So, we choose Skip-Thoughts to be our question encoding method. In Skip-Thoughts model, it uses an RNN encoder with GRU BIBREF21 activations, and then we use this encoder to map an English sentence into a vector. Regarding GRU, it has been shown to perform as well as LSTM BIBREF17 on the sequence modeling applications but being conceptually simpler because GRU units only have 2 gates and do not need the use of a cell.", "Question encoder. Let $w_{i}^{1},...,w_{i}^{N}$ be the words in question $s_{i}$ and N is the total number of words in $s_{i}$ . Note that $w_{i}^{t}$ denotes the $t$ -th word for $s_{i}$ and $\\mathbf {x}_{i}^t$ denotes its word embedding. The question encoder at each time step generates a hidden state $\\mathbf {h}_{i}^{t}$ . It can be considered as the representation of the sequence $w_{i}^{1},..., w_{i}^{t}$ . So, the hidden state $\\mathbf {h}_{i}^{N}$ can represent the whole question. For convenience, here we drop the index $s_{i}$0 and iterate the following sequential equations to encode a question: ", "$$\\mathbf {r}^{t}~=~\\sigma (\\mathbf {U}_{r}\\mathbf {h}^{t-1}+\\mathbf {W}_{r}\\mathbf {x}^{t})$$ (Eq. 12) ", "$$\\mathbf {z}^{t}~=~\\sigma (\\mathbf {U}_{z}\\mathbf {h}^{t-1}+\\mathbf {W}_{z}\\mathbf {x}^{t})$$ (Eq. 13) ", ", where $\\mathbf {U}_{r}$ , $\\mathbf {U}_{z}$ , $\\mathbf {W}_{r}$ , $\\mathbf {W}_{z}$ , $\\mathbf {U}$ and $\\mathbf {W}$ are the matrices of weight parameters. $\\bar{\\mathbf {h}}^{t}$ is the state update at time step $t$ , $\\mathbf {r}^{t}$ is the reset gate, $\\odot $ denotes an element-wise product and $\\mathbf {U}_{z}$0 is the update gate. These two update gates take the values between zero and one." 
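The candidate state $\bar{\mathbf {h}}^{t}$ and the hidden-state update that close the recurrence are referenced in the text but not shown above. Under the standard GRU formulation used by the Skip-Thoughts encoder BIBREF9 , BIBREF21 , and with the notation already introduced, they would take the following form (a reconstruction, not equations quoted from the source):

$$\bar{\mathbf {h}}^{t}~=~\rm {tanh}(\mathbf {U}(\mathbf {r}^{t}\odot \mathbf {h}^{t-1})+\mathbf {W}\mathbf {x}^{t})$$

$$\mathbf {h}^{t}~=~(1-\mathbf {z}^{t})\odot \mathbf {h}^{t-1}+\mathbf {z}^{t}\odot \bar{\mathbf {h}}^{t}$$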
], [ "Our idea is the BQ generation for MQ and, at the same time, we only want the minimum number of BQ to represent the MQ, so modeling our problem as $LASSO$ optimization problem is an appropriate way: ", "$$\\min _{\\mathbf {x}}~\\frac{1}{2}\\left\\Vert A\\mathbf {x}-\\mathbf {b} \\right\\Vert _{2}^{2}+\\lambda \\left\\Vert \\mathbf {x} \\right\\Vert _{1}$$ (Eq. 17) ", ", where $A$ is the matrix of encoded BQ, $\\mathbf {b}$ is the encode MQ and $\\lambda $ is a parameter of the regularization term." ], [ "We now describe how to generate the BQ of a query question, illustrated by Figure 2 . Note that the following we only describe the open-ended question case because the multiple-choice case is same as open-ended one. According to Section 5.2, we can encode the all questions from the training and validation questions of VQA dataset BIBREF0 by Skip-Thought Vectors, and then we have the matrix of these encoded basic questions. Each column of the matrix is the vector representation, 4800 by 1 dimensions, of a basic question and we have 215623 columns. That is, the dimension of BQ matrix, called $A$ , is 4800 by 215623. Also, we encode the query question as a column vector, 4800 by 1 dimensions, by Skip-Thought Vectors, called $\\mathbf {b}$ . Now, we can solve the $LASSO$ optimization problem, mentioned in Section 5.3, to get the solution, $\\mathbf {x}$ . Here, we consider the elements, in solution vector $\\mathbf {x}$ , as the weights of the corresponding BQ in BQ matrix, $A$ . The first element of $\\mathbf {x}$ corresponds to the first column, i.e. the first BQ, of $A$ . Then, we rank the all weights in $\\mathbf {x}$ and pick up the top 3 large weights with corresponding BQ to be the BQ of the query question. Intuitively, because BQ are important to MQ, the weights of BQ also can be considered as importance scores and the BQ with larger weight means more important to MQ. Finally, we find the BQ of all 142093 testing questions from VQA dataset and collect them together, with the format $\\lbrace Image,~MQ,~3~(BQ + corresponding~ similarity~score)\\rbrace $ , as the BQD in Section 4." ], [ "In this section, we propose a criterion to use these BQ. In BQD, each MQ has three corresponding BQ with scores. We can have the following format, $\\lbrace MQ,(BQ1,~score1),(BQ2,~score2),(BQ3,~score3)\\rbrace $ , and these scores are all between 0 and 1 with the following order, ", "$$score1\\ge score2\\ge score3$$ (Eq. 20) ", "and we define 3 thresholds, $s1$ , $s2$ and $s3$ . Also, we compute the following 3 averages ( $avg$ ) and 3 standard deviations ( $std$ ) to $score1$ , $score2/score1$ and $score3/score2$ , respectively, and then use $avg \\pm std$ , referring to Table 3 , to be the initial guess of proper thresholds. The BQ utilization process can be explained as Table 1 . The detailed discussion about BQ concatenation algorithm is described in the Section 6.4." ], [ "There are two types of Co-Attention Mechanism BIBREF7 , Parallel and Alternating. In our VQABQ model, we only use the VQA algorithm with Alternating Co-Attention Mechanism to be our VQA module, referring to Figure 2 , because, in BIBREF7 , Alternating Co-Attention Mechanism VQA module can get the higher accuracy than the Parallel one. Moreover, we want to compare with the VQA method, Alternating one, with higher accuracy in BIBREF7 . In Alternating Co-Attention Mechanism, it sequentially alternates between generating question and image attention. 
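As a concrete illustration of the basic question generation step described above, the LASSO problem can be solved with off-the-shelf tools; the sketch below assumes the matrix of 215623 encoded basic questions (4800-dimensional columns) and the encoded main question are available as NumPy arrays, uses scikit-learn's Lasso, and treats the regularization value and all names as illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def top_basic_questions(A, b, questions, k=3, lam=1e-3):
    """A: (4800, num_bq) matrix whose columns encode the basic questions.
    b: (4800,) Skip-Thought vector of the main question.
    Returns the k basic questions with the largest LASSO weights."""
    # scikit-learn minimizes 1/(2n) * ||b - A x||^2 + alpha * ||x||_1
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    model.fit(A, b)
    weights = model.coef_
    top = np.argsort(weights)[::-1][:k]
    return [(questions[i], float(weights[i])) for i in top]
```

The selected basic questions, the main question and the image are then passed to the answering module; as noted above, its Alternating Co-Attention mechanism alternates between attending to the question and to the image.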
That is, this mechanism consists of three main steps:", "First, the input question is summarized into a single vector $\\mathbf {q}$ .", "Second, attend to the given image depended on $\\mathbf {q}$ .", "Third, attend to the question depended on the attended image feature.", "We can define $\\hat{\\mathbf {x}}$ is an attention operator, which is a function of $\\mathbf {X}$ and $\\mathbf {g}$ . This operator takes the question (or image) feature $\\mathbf {X}$ and attention guider $\\mathbf {g}$ derived from image (or question) as inputs, and then outputs the attended question (or image) vector. We can explain the above operation as the following steps: ", "$$\\mathbf {H}~=~\\rm {tanh}(\\mathbf {W}_{x}\\mathbf {X}+(\\mathbf {W}_{g}g)\\mathbf {1}^{T})$$ (Eq. 26) ", "$$\\mathbf {a}^{x}~=~\\rm {softmax}(\\mathbf {w}_{hx}^{T}\\mathbf {H})$$ (Eq. 27) ", ", where $\\mathbf {a}^{x}$ is the attention weight of feature $\\mathbf {X}$ , $\\mathbf {1}$ is a vector whose elements are all equal to 1, and $\\mathbf {W}_{g}$ , $\\mathbf {W}_{x}$ and $\\mathbf {w}_{hx}$ are matrices of parameters.", "Concretely, at the first step of Alternating Co-Attention Mechanism, $\\mathbf {g}$ is 0 and $\\mathbf {X} = \\mathbf {Q}$ . Then, at the second step, $\\mathbf {X} = \\mathbf {V}$ where $\\mathbf {V}$ is the image features and the guider, $\\mathbf {g}$ , is intermediate attended question feature, $\\hat{s}$ , which is from the first step. At the final step, it uses the attended image feature, $\\hat{v}$ , as the guider to attend the question again. That is, $\\mathbf {X} = \\mathbf {Q}$ and $\\mathbf {g} = \\hat{v}$ ." ], [ "In Section 6, we describe the details of our implementation and discuss the experiment results about the proposed method." ], [ "We conduct our experiments on VQA BIBREF0 dataset. VQA dataset is based on the MS COCO dataset BIBREF30 and it contains the largest number of questions. There are questions, 248349 for training, 121512 for validation and 244302 for testing. In the VQA dataset, each question is associated with 10 answers annotated by different people from Amazon Mechanical Turk (AMT). About 98% of answers do not exceed 3 words and 90% of answers have single words. Note that we only test our method on the open-ended case in VQA dataset because it has the most open-ended questions among the all available dataset and we also think open-ended task is closer to the real situation than multiple-choice one." ], [ "In order to prove our claim that BQ can help accuracy and compare with the state-of-the-art VQA method BIBREF7 , so, in our Module 2, we use the same setting, dataset and source code mentioned in BIBREF7 . Then, the Module 1 in VQABQ model, is our basic question generation module. In other words, in our model ,the only difference compared to BIBREF7 is our Module 1, illustrated by Figure 2 ." ], [ "VQA dataset provides multiple-choice and open-ended task for evaluation. Regarding open-ended task, the answer can be any phrase or word. However, in multiple-choice task, an answer should be chosen from 18 candidate answers. For both cases, answers are evaluated by accuracy which can reflect human consensus. The accuracy is given by the following: ", "$$Accuracy_{_{VQA}}=\\frac{1}{N}\\sum _{i=1}^{N}\\min \\left\\lbrace \\frac{\\sum _{t\\in T_{i}}\\mathbb {I}[a_{i}=t]}{3},1 \\right\\rbrace $$ (Eq. 
36) ", ", where $N$ is the total number of examples, $\\mathbb {I}[\\cdot ]$ denotes an indicator function, $a_{i}$ is the predicted answer and $T_{i}$ is an answer set of the $i^{th}$ example. That is, a predicted answer is considered as a correct one if at least 3 annotators agree with it, and the score depends on the total number of agreements when the predicted answer is not correct." ], [ "Here, we describe our final results and analysis by the following parts:", "Does Basic Question Help Accuracy ?", "The answer is yes. Here we only discuss the open-ended case. In our experiment, we use the $avg\\pm std$ , referring to Table 3 , to be the initial guess of proper thresholds of s1, s2 and s3, in Table 1 . We discover that when s1 = 0.43, s2 = 0.82 and s3 = 0.53, we can get the better utilization of BQ. The threshold, s1 = 0.43, can be consider as 43% of testing questions from VQA dataset which cannot find the basic question, from the training and validation sets of VQA dataset, and only 57% of testing questions can find the basic questions. Note that we combine the training and validation sets of VQA dataset to be our basic question dataset. Regarding s2 = 0.82, that means 82% of those 57% testing questions, i.e. 46.74%, only can find 1 basic question, and 18% of those 57% testing questions, i.e. 10.26%, can find at least 2 basic questions. Furthermore, s3 = 0.53 means that 53% of those 10.26% testing question, i.e. around 5.44%, only can find 2 basic questions, and 47% of those 10.26% testing question, i.e. around 4.82%, can find 3 basic questions. The above detail can be referred to Table 2 .", "Accordingly to the Table 2 , 43% of testing questions from VQA dataset cannot find the proper basic questions from VQA training and validation datasets, and there are some failed examples about this case in Table 6 . We also discover that a lot of questions in VQA training and validation datasets are almost the same. This issue reduces the diversity of basic question dataset. Although we only have 57% of testing questions can benefit from the basic questions, our method still can improve the state-of-the-art accuracy BIBREF7 from 60.32% to 60.34%, referring to Table 4 and 5 . Then, we have 142093 testing questions, so that means the number of correctly answering questions of our method is more than state-of-the-art method 28 questions. In other words, if we have well enough basic question dataset, we can increase accuracy more, especially in the counting-type question, referring to Table 4 and 5 . Because the Co-Attention Mechanism is good at localizing, the counting-type question is improved more than others. So, based on our experiment, we can conclude that basic question can help accuracy obviously.", "Comparison with State-of-the-art.", "Recently, BIBREF7 proposed the Co-Attention Mechanism in VQA and got the state-of-the-art accuracy. However, when we use their code and the same setup mentioned in their paper to re-run the experiment, we cannot get the same accuracy reported in their work. The re-run results are presented in Table 5 . So, under the fair conditions, our method is competitive compared to the state-of-the-art." ], [ "In this paper, we propose a VQABQ model for visual question answering. The VQABQ model has two main modules, Basic Question Generation Module and Co-Attention VQA Module. 
The former generates the basic questions for the query question, and the latter takes the image, the basic questions and the query question as input and outputs the text-based answer to the query question. According to Section 6.4, because the basic question dataset generated from the VQA dataset is not good enough, only 57% of all testing questions can benefit from the basic questions. Even so, we answer 28 more questions correctly than the state-of-the-art. We believe that with a better basic question dataset the accuracy gain would be much larger.", "The previous state-of-the-art methods in VQA all achieve their highest accuracy on Yes/No-type questions. Therefore, how to effectively exploit only Yes/No-type basic questions for VQA will be an interesting direction, illustrated by Figure 3 . Other open questions are how to generate other specific types of basic questions based on the query question, and how to better combine visual and textual features in order to decrease the semantic inconsistency. These future works will be our next research focus." ], [ "This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST). We would also like to acknowledge Fabian Caba, Humam Alwassel and Adel Bibi for their helpful discussions about this work." ] ], "section_name": [ "Introduction", "Motivations", "Related Work", "Basic Question Dataset", "Methodology", "Question encoding", "Problem Formulation", "Basic Question Generation", "Basic Question Concatenation", "Co-Attention Mechanism", "Experiment", "Datasets", "Setup", "Evaluation Metrics", "Results and Analysis", "Conclusion and Future Work", "Acknowledgements" ] }
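To make the basic-question selection above concrete, here is a minimal sketch of the LASSO-based ranking of Eq. (17), assuming scikit-learn's Lasso solver and a small random matrix standing in for the 4800 by 215623 Skip-Thought encodings; variable names and sizes are illustrative and not taken from the authors' code.

# Sketch of the LASSO-based basic-question (BQ) selection described above.
# The Skip-Thought encoder is mocked with random vectors; in the paper, A is the
# 4800 x 215623 matrix of encoded training/validation questions and b is the
# encoded main question (MQ).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_dims, n_basic = 4800, 1000                  # the paper uses 4800 x 215623
A = rng.standard_normal((n_dims, n_basic))    # columns = encoded basic questions
# Build a query encoding that is (mostly) a sparse mix of three basic questions.
b = 0.7 * A[:, 10] + 0.5 * A[:, 42] + 0.3 * A[:, 7] + 0.1 * rng.standard_normal(n_dims)

# scikit-learn's Lasso minimizes (1/(2*n_samples))*||b - Ax||_2^2 + alpha*||x||_1,
# i.e. Eq. (17) up to a constant rescaling of the regularization weight lambda.
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(A, b)
weights = lasso.coef_                         # one weight per basic question

# Rank the weights and keep the 3 largest, mirroring the top-3 BQ selection.
top3 = np.argsort(-weights)[:3]
for rank, idx in enumerate(top3, 1):
    print(f"BQ{rank}: column {idx}, weight {weights[idx]:.3f}")

Ranking by weight follows the paper's description of picking the 3 largest LASSO weights as importance scores; on this toy input the three planted columns (10, 42, 7) are recovered.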
{ "answers": [ { "annotation_id": [ "e3aa84e3788c28eab6a7e9354580f34bebd4a36c" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4. Evaluation results on VQA dataset [1]. ”-” indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) are the results by using different thresholds. Note that our VGGNet is same as CoAtt+VGG." ], "extractive_spans": [], "free_form_answer": "in open-ended task esp. for counting-type questions ", "highlighted_evidence": [ "FLOAT SELECTED: Table 4. Evaluation results on VQA dataset [1]. ”-” indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) are the results by using different thresholds. Note that our VGGNet is same as CoAtt+VGG." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "55f0c5675fc4c77053e58a576af036d4145ba703" ], "answer": [ { "evidence": [ "Accordingly to the Table 2 , 43% of testing questions from VQA dataset cannot find the proper basic questions from VQA training and validation datasets, and there are some failed examples about this case in Table 6 . We also discover that a lot of questions in VQA training and validation datasets are almost the same. This issue reduces the diversity of basic question dataset. Although we only have 57% of testing questions can benefit from the basic questions, our method still can improve the state-of-the-art accuracy BIBREF7 from 60.32% to 60.34%, referring to Table 4 and 5 . Then, we have 142093 testing questions, so that means the number of correctly answering questions of our method is more than state-of-the-art method 28 questions. In other words, if we have well enough basic question dataset, we can increase accuracy more, especially in the counting-type question, referring to Table 4 and 5 . Because the Co-Attention Mechanism is good at localizing, the counting-type question is improved more than others. So, based on our experiment, we can conclude that basic question can help accuracy obviously." ], "extractive_spans": [ "our method still can improve the state-of-the-art accuracy BIBREF7 from 60.32% to 60.34%" ], "free_form_answer": "", "highlighted_evidence": [ "our method still can improve the state-of-the-art accuracy BIBREF7 from 60.32% to 60.34%", "Although we only have 57% of testing questions can benefit from the basic questions, our method still can improve the state-of-the-art accuracy BIBREF7 from 60.32% to 60.34%, " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "fc3d2ab844b5fa234fffc61590c32a946c30b3e0" ], "answer": [ { "evidence": [ "Our idea is the BQ generation for MQ and, at the same time, we only want the minimum number of BQ to represent the MQ, so modeling our problem as $LASSO$ optimization problem is an appropriate way:" ], "extractive_spans": [], "free_form_answer": "LASSO optimization problem", "highlighted_evidence": [ "Our idea is the BQ generation for MQ and, at the same time, we only want the minimum number of BQ to represent the MQ, so modeling our problem as $LASSO$ optimization problem is an appropriate way" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "afa711b71b2236c63db7b21e77b4288b99f28bf9" ], "answer": [ { "evidence": [ "The Co-Attention mechanism inspires us to build part of our VQABQ model, illustrated by Figure 2 . 
In the VQABQ model, there are two main modules, the basic question generation module (Module 1) and co-attention visual question answering module (Module 2). We take the query question, called the main question (MQ), encoded by Skip-Thought Vectors BIBREF9 , as the input of Module 1. In the Module 1, we encode all of the questions, also by Skip-Thought Vectors, from the training and validation sets of VQA BIBREF0 dataset as a 4800 by 215623 dimension basic question (BQ) matrix, and then solve the LASSO optimization problem, with MQ, to find the 3 BQ of MQ. These BQ are the output of Module 1. Moreover, we take the MQ, BQ and the given image as the input of Module 2, the VQA module with co-attention mechanism, and then it can output the final answer of MQ. We claim that the BQ can help Module 2 get the correct answer to increase the VQA accuracy. In this work, our main contributions are summarized below:" ], "extractive_spans": [ "the basic question generation module (Module 1) and co-attention visual question answering module (Module 2)" ], "free_form_answer": "", "highlighted_evidence": [ "In the VQABQ model, there are two main modules, the basic question generation module (Module 1) and co-attention visual question answering module (Module 2). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "In which setting they achieve the state of the art?", "What accuracy do they approach with their proposed method?", "What they formulate the question generation as?", "What two main modules their approach consists of?" ], "question_id": [ "0c7823b27326b3f5dff51f32f45fc69c91a4e06d", "84a4a1f4695eba599d447e030c94f51e5f2f03bb", "785eb3c7c5a5c27db14006ac357299ed1216313a", "bf6c14e9c5f476062cbaaf9179b0c9b751222c8f" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Question Answering", "Question Answering", "Question Answering", "Question Answering" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
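The accuracy criterion in Eq. (36) above translates directly into a short function; the snippet below is an illustrative re-implementation with invented example data, not the official VQA evaluation script.

# Illustrative implementation of the VQA accuracy in Eq. (36): a predicted answer
# counts as fully correct if at least 3 of the 10 human answers match it.
def vqa_accuracy(predictions, answer_sets):
    """predictions: list of predicted answer strings.
    answer_sets: list of lists, the 10 human answers per question."""
    total = 0.0
    for pred, answers in zip(predictions, answer_sets):
        matches = sum(1 for a in answers if a == pred)
        total += min(matches / 3.0, 1.0)
    return total / len(predictions)

# Toy usage: two questions with full agreement, one with a single match (score 1/3).
preds = ["2", "yes", "red"]
gold = [["2"] * 10, ["yes"] * 10, ["red"] + ["blue"] * 9]
print(round(vqa_accuracy(preds, gold), 3))  # -> (1 + 1 + 1/3) / 3, about 0.778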
{ "caption": [ "Figure 1. Examples of basic questions. Note that MQ denotes the main question and BQ denotes the basic question.", "Figure 2. VQABQ working pipeline. Note that all of the training and validation questions are only encoded by Skip-Thoughts one time for generating the basic question matrix. That is, the next input of Skip-Thoughts is only the new main question. Here, ”⊕” denotes the proposed basic question concatenation method.", "Table 2. We only show the open-ended case of VQA dataset [1], and ”# Q” denoted number of questions.", "Table 3. ”avg” denotes average and ”std” denotes standard deviation.", "Table 1. Note that appending BQ means doing the concatenation with MQ.", "Table 4. Evaluation results on VQA dataset [1]. ”-” indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) are the results by using different thresholds. Note that our VGGNet is same as CoAtt+VGG.", "Table 5. Re-run evaluation results on VQA dataset [1]. ”-” indicates the results are not available. Note that the result of [14] in Table 5 is lower than in Table 4, and CoAtt+VGG is same as our VGGNet. According to the re-run results, our method has the higher accuracy, especially in the counting-type question.", "Table 6. Some failed examples about finding no basic question.", "Figure 3. Some future work examples." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "5-Table2-1.png", "5-Table3-1.png", "5-Table1-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "8-Figure3-1.png" ] }
[ "In which setting they achieve the state of the art?" ]
[ [ "1703.06492-6-Table4-1.png" ] ]
[ "in open-ended task esp. for counting-type questions " ]
587
1701.08118
Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis
Some users of social media are spreading racist, sexist, and otherwise hateful content. For the purpose of training a hate speech detection system, the reliability of the annotations is crucial, but there is no universally agreed-upon definition. We collected potentially hateful messages and asked two groups of internet users to determine whether they were hate speech or not, whether they should be banned or not and to rate their degree of offensiveness. One of the groups was shown a definition prior to completing the survey. We aimed to assess whether hate speech can be annotated reliably, and the extent to which existing definitions are in accordance with subjective ratings. Our results indicate that showing users a definition caused them to partially align their own opinion with the definition but did not improve reliability, which was very low overall. We conclude that the presence of hate speech should perhaps not be considered a binary yes-or-no decision, and raters need more detailed instructions for the annotation.
{ "paragraphs": [ [ "Social media are sometimes used to disseminate hateful messages. In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis. Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported BIBREF0 .", "This raises the question of how hate speech can be detected automatically. Such an automatic detection method could be used to scan the large amount of text generated on the internet for hateful content and report it to the relevant authorities. It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale.", "From a natural language processing perspective, hate speech detection can be considered a classification task: given an utterance, determine whether or not it contains hate speech. Training a classifier requires a large amount of data that is unambiguously hate speech. This data is typically obtained by manually annotating a set of texts based on whether a certain element contains hate speech.", "The reliability of the human annotations is essential, both to ensure that the algorithm can accurately learn the characteristics of hate speech, and as an upper bound on the expected performance BIBREF1 , BIBREF2 . As a preliminary step, six annotators rated 469 tweets. We found that agreement was very low (see Section 3). We then carried out group discussions to find possible reasons. They revealed that there is considerable ambiguity in existing definitions. A given statement may be considered hate speech or not depending on someone's cultural background and personal sensibilities. The wording of the question may also play a role.", "We decided to investigate the issue of reliability further by conducting a more comprehensive study across a large number of annotators, which we present in this paper.", "Our contribution in this paper is threefold:" ], [ "For the purpose of building a classifier, warner2012 define hate speech as “abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation”. More recent approaches rely on lists of guidelines such as a tweet being hate speech if it “uses a sexist or racial slur” BIBREF2 . These approaches are similar in that they leave plenty of room for personal interpretation, since there may be differences in what is considered offensive. For instance, while the utterance “the refugees will live off our money” is clearly generalising and maybe unfair, it is unclear if this is already hate speech. More precise definitions from law are specific to certain jurisdictions and therefore do not capture all forms of offensive, hateful speech, see e.g. matsuda1993. In practice, social media services are using their own definitions which have been subject to adjustments over the years BIBREF3 . As of June 2016, Twitter bans hateful conduct.", "With the rise in popularity of social media, the presence of hate speech has grown on the internet. Posting a tweet takes little more than a working internet connection but may be seen by users all over the world.", "Along with the presence of hate speech, its real-life consequences are also growing. It can be a precursor and incentive for hate crimes, and it can be so severe that it can even be a health issue BIBREF4 . 
It is also known that hate speech does not only mirror existing opinions in the reader but can also induce new negative feelings towards its targets BIBREF5 . Hate speech has recently gained some interest as a research topic on the one hand – e.g. BIBREF6 , BIBREF4 , BIBREF7 – but also as a problem to deal with in politics such as the No Hate Speech Movement by the Council of Europe.", "The current refugee crisis has made it evident that governments, organisations and the public share an interest in controlling hate speech in social media. However, there seems to be little consensus on what hate speech actually is." ], [ "As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus. We used Twitter as a source as it offers recent comments on current events. In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links. This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them.", "To find a large amount of hate speech on the refugee crisis, we used 10 hashtags that can be used in an insulting or offensive way. Using these hashtags we gathered 13 766 tweets in total, roughly dating from February to March 2016. However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images. We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together. In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised Levenshtein edit distance smaller than .85 to an aforementioned tweet. A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs. The search term #Pack (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded. As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible. After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies.", "As a first measurement of the frequency of hate speech in our corpus, we personally annotated them based on our previous expertise. The 541 tweets were split into six parts and each part was annotated by two out of six annotators in order to determine if hate speech was present or not. The annotators were rotated so that each pair of annotators only evaluated one part. Additionally the offensiveness of a tweet was rated on a 6-point Likert scale, the same scale used later in the study.", "Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's INLINEFORM0 ). This supports our claim that a clearer definition is necessary in order to be able to train a reliable classifier. The low reliability could of course be explained by varying personal attitudes or backgrounds, but clearly needs more consideration." ], [ "In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. 
They were completed by 56 participants in total (see Table TABREF7 ). The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content. We used the Twitter definition of hateful conduct in the first survey. This definition was presented at the beginning, and again above every tweet. The second survey did not contain any definition. Participants were randomly assigned one of the two surveys.", "The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions. Depending on the survey, participants were asked (1) to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion. Afterwards they were asked (2) to answer (yes/no) if the tweet should be banned from Twitter. Participants were finally asked (3) to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive.", "After the annotation of the 20 tweets, participants were asked to voluntarily answer an open question regarding the definition of hate speech. In the survey with the definition, they were asked if the definition of Twitter was sufficient. In the survey without the definition, the participants were asked to suggest a definition themselves. Finally, sociodemographic data were collected, including age, gender and more specific information regarding the participant's political orientation, migration background, and personal position regarding the refugee situation in Europe.", "The surveys were approved by the ethical committee of the Department of Computer Science and Applied Cognitive Science of the Faculty of Engineering at the University of Duisburg-Essen." ], [ "Since the surveys were completed by 56 participants, they resulted in 1120 annotations. Table TABREF7 shows some summary statistics.", "To assess whether the definition had any effect, we calculated, for each participant, the percentage of tweets they considered hate speech or suggested to ban and their mean offensiveness rating. This allowed us to compare the two samples for each of the three questions. Preliminary Shapiro-Wilk tests indicated that some of the data were not normally distributed. We therefore used the Wilcoxon-Mann-Whitney (WMW) test to compare the three pairs of series. The results are reported in Table TABREF7 .", "Participants who were shown the definition were more likely to suggest to ban the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition.", "We chose Krippendorff's INLINEFORM0 to assess reliability, a measure from content analysis, where human coders are required to be interchangeable. Therefore, it measures agreement instead of association, which leaves no room for the individual predilections of coders. It can be applied to any number of coders and to interval as well as nominal data. BIBREF8 ", "This allowed us to compare agreement between both groups for all three questions. Figure FIGREF8 visualises the results. Overall, agreement was very low, ranging from INLINEFORM0 to INLINEFORM1 . 
In contrast, for the purpose of content analysis, Krippendorff recommends a minimum of INLINEFORM2 , or a minimum of INLINEFORM3 for applications where some uncertainty is unproblematic BIBREF8 . Reliability did not consistently increase when participants were shown a definition.", "To measure the extent to which the annotations using the Twitter definition (question one in group one) were in accordance with participants' opinions (question one in group two), we calculated, for each tweet, the percentage of participants in each group who considered it hate speech, and then calculated Pearson's correlation coefficient. The two series correlate strongly ( INLINEFORM0 ), indicating that they measure the same underlying construct." ], [ "This paper describes the creation of our hate speech corpus and offers first insights into the low agreement among users when it comes to identifying hateful messages. Our results imply that hate speech is a vague concept that requires significantly better definitions and guidelines in order to be annotated reliably. Based on the present findings, we are planning to develop a new coding scheme which includes clear-cut criteria that let people distinguish hate speech from other content.", "Researchers who are building a hate speech detection system might want to collect multiple labels for each tweet and average the results. Of course this approach does not make the original data any more reliable BIBREF8 . Yet, collecting the opinions of more users gives a more detailed picture of objective (or intersubjective) hatefulness. For the same reason, researchers might want to consider hate speech detection a regression problem, predicting, for example, the degree of hatefulness of a message, instead of a binary yes-or-no classification task.", "In the future, finding the characteristics that make users consider content hateful will be useful for building a model that automatically detects hate speech and users who spread hateful content, and for determining what makes users disseminate hateful content." ], [ "This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. GRK 2167, Research Training Group ”User-Centred Social Media”." ] ], "section_name": [ "Introduction", "Hate Speech", "Compiling A Hate Speech Corpus", "Methods", "Preliminary Results and Discussion", "Conclusion and Future Work", "Acknowledgments" ] }
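As a rough illustration of the reliability analysis described above, the snippet below computes Krippendorff's alpha over (annotator, tweet, label) triples, assuming NLTK's agreement module is available; the ratings are invented toy data, not the actual survey responses.

# Sketch of the reliability computation described above, using NLTK's agreement
# module for Krippendorff's alpha on nominal hate-speech judgements.
from nltk.metrics.agreement import AnnotationTask

# Triples of (annotator_id, tweet_id, label) for the yes/no hate-speech question.
ratings = [
    ("p1", "t1", "yes"), ("p2", "t1", "no"),  ("p3", "t1", "yes"),
    ("p1", "t2", "no"),  ("p2", "t2", "no"),  ("p3", "t2", "no"),
    ("p1", "t3", "yes"), ("p2", "t3", "no"),  ("p3", "t3", "no"),
]
task = AnnotationTask(data=ratings)   # default binary distance suits nominal labels
print(f"Krippendorff's alpha: {task.alpha():.3f}")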
{ "answers": [ { "annotation_id": [ "f1e8a4a248ed425ebe2bb19c2036df5d051020b5" ], "answer": [ { "evidence": [ "As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus. We used Twitter as a source as it offers recent comments on current events. In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links. This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them." ], "extractive_spans": [ "German" ], "free_form_answer": "", "highlighted_evidence": [ "As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe. We therefore had to compile our own corpus." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "f2591bc9683b3ffe748adee50827878683cd8f13" ], "answer": [ { "evidence": [ "Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's INLINEFORM0 ). This supports our claim that a clearer definition is necessary in order to be able to train a reliable classifier. The low reliability could of course be explained by varying personal attitudes or backgrounds, but clearly needs more consideration." ], "extractive_spans": [ "level of agreement (Krippendorff's INLINEFORM0 )" ], "free_form_answer": "", "highlighted_evidence": [ "Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's INLINEFORM0 )." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "a51c08f7412b4c967c08873e1706b77cc42cfa40" ], "answer": [ { "evidence": [ "Participants who were shown the definition were more likely to suggest to ban the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition." ], "extractive_spans": [ "participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%)" ], "free_form_answer": "", "highlighted_evidence": [ "Participants who were shown the definition were more likely to suggest to ban the tweet. In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%). This suggests that participants in that group aligned their own opinion with the definition." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "5631a776056b4320e55b1a671c4f15bfae68fb0c" ], "answer": [ { "evidence": [ "In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. They were completed by 56 participants in total (see Table TABREF7 ). The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content. We used the Twitter definition of hateful conduct in the first survey. 
This definition was presented at the beginning, and again above every tweet. The second survey did not contain any definition. Participants were randomly assigned one of the two surveys." ], "extractive_spans": [ "Twitter definition of hateful conduct" ], "free_form_answer": "", "highlighted_evidence": [ "We used the Twitter definition of hateful conduct in the first survey. This definition was presented at the beginning, and again above every tweet." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "59c9946749b14e93710c01f1e4c51a08a3f7f3e2" ], "answer": [ { "evidence": [ "The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions. Depending on the survey, participants were asked (1) to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion. Afterwards they were asked (2) to answer (yes/no) if the tweet should be banned from Twitter. Participants were finally asked (3) to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive." ], "extractive_spans": [], "free_form_answer": "Personal thought of the annotator.", "highlighted_evidence": [ "to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "c6d9de28c4b8e21f502f5217611588cddaac847f" ], "answer": [ { "evidence": [ "To find a large amount of hate speech on the refugee crisis, we used 10 hashtags that can be used in an insulting or offensive way. Using these hashtags we gathered 13 766 tweets in total, roughly dating from February to March 2016. However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images. We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together. In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised Levenshtein edit distance smaller than .85 to an aforementioned tweet. A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs. The search term #Pack (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded. As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible. After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies." ], "extractive_spans": [ "10 hashtags that can be used in an insulting or offensive way" ], "free_form_answer": "", "highlighted_evidence": [ "To find a large amount of hate speech on the refugee crisis, we used 10 hashtags that can be used in an insulting or offensive way." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "paper_read": [ "", "", "", "", "", "" ], "question": [ "What languages are were included in the dataset of hateful content?", "How was reliability measured?", "How did the authors demonstrate that showing a hate speech definition caused annotators to partially align their own opinion with the definition?", "What definition was one of the groups was shown?", "Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?", "How were potentially hateful messages identified?" ], "question_id": [ "7486c9d9e6c407c0c3bc012405d689dbee072327", "0f2403fa77738bf05534d7f9d83c9dbb0a0d6140", "21df76462c76d6e2d52fb7dce573ee5336627cb5", "45be26c01e82835d9949529003c6b64f90db3d1a", "e28019afcb55c01516998554503bc1b56f923995", "551cc0401674f7c363e0018b8186a125f7b17e99" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ] }
{ "caption": [ "Table 1: Summary statistics with p values and effect size estimates from WMW tests. Not all participants chose to report their age or gender.", "Figure 1: Reliability (Krippendorff’s a) for the different groups and questions" ], "file": [ "4-Table1-1.png", "5-Figure1-1.png" ] }
[ "Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?" ]
[ [ "1701.08118-Methods-1" ] ]
[ "Personal thought of the annotator." ]
589
1905.09866
Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor
Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also exposed how strongly human biases are encoded in vector spaces built on natural language. While finding that queen is the answer to man is to king as woman is to X leaves us in awe, papers have also reported finding analogies deeply infused with human biases, like man is to computer programmer as woman is to homemaker, which instead leave us with worry and rage. In this work we show that, often unknowingly, embedding spaces have not been treated fairly. Through a series of simple experiments, we highlight practical and theoretical problems in previous works, and demonstrate that some of the most widely used biased analogies are in fact not supported by the data. We claim that rather than striving to find sensational biases, we should aim at observing the data "as is", which is biased enough. This should serve as a fair starting point to properly address the evident, serious, and compelling problem of human bias in word embeddings.
{ "paragraphs": [ [ "Word embeddings are distributed representations of texts which capture similarities between words. Beside improving a wide variety of NLP tasks, the power of word embeddings is often also tested intrinsically. Together with the idea of training word embeddings, BIBREF0 introduced the idea of testing the soundness of embedding spaces via the analogy task. Proportional analogies are equations of the form INLINEFORM0 , or simply A is to B as C is to D. Given the terms INLINEFORM1 , the model must return the word that correctly stands for INLINEFORM2 in the given analogy. A most classic example is man is to king as woman is to X, where the model is expected to return queen, by subtracting “manness\" from the concept of king to obtain some general royalty, and then re-adding some “womanness\" to obtain the concept of queen ( INLINEFORM3 ).", "Beside this kind of magical power, however, embeddings have been shown to carry worrying biases present in our society and thus encoded in language. Recent studies BIBREF1 , BIBREF2 found that embeddings yield biased analogies such as the classic man is to doctor as woman is to nurse, or man is to computer programmer as woman is to homemaker.", "Attempts at reducing bias, either via postprocessing BIBREF1 or directly in training BIBREF3 have nevertheless left two outstanding issues: bias is still encoded implicitly BIBREF4 , and it is debatable whether we should aim at removal or rather at transparency and awareness BIBREF5 , BIBREF4 .", "With an eye to transparency, we took a closer look at the analogy structure. In the original proportional analogy implementation, all terms of the equation INLINEFORM0 are distinct BIBREF0 , BIBREF6 . In other words, the model is forced to return a different concept than the original ones. Given an analogy of the form INLINEFORM1 , the model is not allowed to yield any term INLINEFORM2 such that INLINEFORM3 , or INLINEFORM4 , or INLINEFORM5 , since the code explicitly prevents this. While this constraint is helpful when all terms of the analogy are expected to be different, it becomes a problem, and even a dangerous artifact, when the terms could or even should be the same.", "We investigate this issue using the original analogy test set BIBREF0 , and examples from the literature. We test all examples on different embedding spaces built for English, using two settings for the analogy code: when all terms must be different (as in the original, widely used, implementation), and without this constraint, meaning that any word, including the input terms, can be returned. As far as we know, this is the first work that evaluates and reports analogies in an unrestricted fashion, since the analogy code is always used as is. Our experiments and results suggest that the mainstream examples as well as the use of the analogy task itself as a tool to detect bias should be revised and reconsidered.", "Warning This work does not mean at all to downplay the presence and danger of human biases in word embeddings. On the contrary: embeddings do encode human biases, and we believe that this issue deserves the full attention of the field. However, we also believe that overemphasising and specifically seeking biases to achieve sensational results is not beneficial. It is also not necessary: what we observe naturally is worrying and sensational enough. Rather, we should aim at transparency and experimental clarity so as to ensure the fairest and most efficient treatment of the problem." 
], [ "For both word2vec BIBREF0 and gensim BIBREF7 we adapted the code so that the input terms of the analogy query are allowed to be returned. Throughout this article, we use two different embedding spaces. The first is the widely used representation built on GoogleNews BIBREF8 . The second is taken from BIBREF2 , and was trained on a Reddit dataset BIBREF9 .", "We test analogies using the code with and without modification, with the aim of showing the drawbacks and dangers of constraining (and selecting) the output of analogy queries to word embeddings. The analogies we use in this article come from three sources: the original analogy dataset proposed by BIBREF0 (Section SECREF3 ), a small selection of additional analogies to highlight the need to be able to return input vectors (Section SECREF3 ), and a collection of examples found in papers that address the problem of (human) biases in word embeddings (Section SECREF4 ). We follow BIBREF0 , BIBREF1 and BIBREF2 by using 3cosadd to calculate the analogies, as shown in Equation EQREF2 :", " DISPLAYFORM0 ", "All the examples used in this article, plus any new query, can be tested on any of the embeddings in the original and modified analogy code, and through our online demo." ], [ "The original, widely used, analogy test set introduced by BIBREF0 consists of two main categories: semantic analogies (Paris is to France as Tokyo is to Japan) and morpho-syntactic analogies (car is to cars as table is to tables). Within these, examples are classified in more specific sub-categories, as shown in the left column of Table TABREF5 . In the same table we report two scores based on the Google News embeddings as well as for the reddit embeddings from BIBREF2 . Under “orig\" we report the score obtained using the original analogy code, and under “fair\" we report the score yielded by our altered version, where the query terms ( INLINEFORM0 ) can be returned.", "The results show a drastic drop in performance for the fair setting. In most cases, this is because the second term is returned as answer (man is to king as woman is to king, thus INLINEFORM0 ), but in some cases it is the third term that gets returned (big is to bigger as cold is to cold, thus INLINEFORM1 ). Results are over 50% lower in the semantic set, and the drop is even more serious in the syntactic examples, with the exception of “nationality-adj\".", "In the default evaluation set, and in the extended set proposed by BIBREF10 to cover additional linguistic relations, there are no word-pairs for which the gold target word is one of the three query words, in other words: A, B or C is the correct answer. Thus one might deem it a reasonable decision that the original analogy code does not let any of the original vectors to be returned. However, these conditions do exist, and this choice has consequences. The major consequence we observe is discussed in Section 4, and has to do with analogies affected by human bias. But even for the analogy types of Table 1, there are cases where this constraint is undesirable, due to homography. Additionally, there are other analogy types for which such constraint is utterly counterproductive, such as is-a or part-of relations." ], [ "One of the most well known analogies brought as example of human bias in word embeddings is man is to doctor as woman is to nurse BIBREF1 , BIBREF2 . 
This heavily biased analogy reflecting gendered stereotypes in our society, is however truly meaningful only if the system were allowed to yield “doctor” (arguably the expected answer in absence of bias) instead of “nurse”, and it doesn't. But we know that the system isn't allowed to return this candidate, since the original analogy code rules out the possibility of returning as D any of the query terms INLINEFORM0 , making it impossible to obtain man is to doctor as woman is to doctor (where INLINEFORM1 ).", "This means that the bias isn't necessarily (or at least not only) in the representations themselves, rather in the way we query them. So, what do the embedding spaces actually tell if you let them return any word in the vocabulary?", "We took a selection of mainstream, striking examples from the literature on embedding bias, and tested them fairly, without posing any constraint on the returned term, exactly as we did for all analogies in Section SECREF3 . In Table TABREF9 we report these examples, organised by the papers which discussed them, together with the returned term as reported in the paper itself, and the top two terms returned when using our modified code (1st and 2nd, respectively). Each example is tested over the same embedding space used in the corresponding paper." ], [ "What immediately stands out is that, bar a few exceptions, we do not obtain the term reported in the respective paper. One reason for this is that the model is now allowed to return the input vectors, and in most cases it does just that (especially INLINEFORM0 ).", "In Section SECREF3 , we saw how this heavily affects the results on the original analogy test, and we also discussed why it would nevertheless be beneficial to impose no restrictions on the returned answer. When analogies are used to study human bias, though, the problem is more serious: How can we claim the model is biased because it does not return doctor if the model is simply not allowed to return doctor?", "As a further constraint to the allowed output, BIBREF1 add an empirical threshold to Equation EQREF2 to ensure that terms that are too similar to INLINEFORM0 are excluded. Consequences are non-trivial. By not allowing the returned vector to be too close to the input vectors, this method basically skips potentially valid, unbiased answers until a potentially more biased answer is found. It isn't necessarily the case that more distance corresponds to more bias, but it is usually the case that less distance is akin to less bias (for example, gynecologist is a less biased answer than nurse to the query man is to doctor as woman is to X)." ], [ "A closer look at the results makes things even more worrying. If the top answer yielded by our unrestricted code is one of the input vectors (e.g. doctor), the original code would not have shown it. It would have instead yielded what we obtain as our second answer. This is what we should see in the reported analogies. However, Table TABREF9 (column Index) shows that this is not always the case.", "The threshold method of BIBREF1 described in Section SECREF10 is the cause for this outcome in their examples, as vectors higher in the rank have been excluded as too close to the input vector. 
Unfortunately, while surely successful over standard, factual analogy cases, this strategy turns out to be essentially a way of selecting the output.", "For example, their strategy not only excludes lovely (input term), but also magnificent as a possible answer for she is to lovely as he is to X, since the vector for magnificent is not distant enough from the vector of input term lovely. As can be seen in Table TABREF9 , lovely and magnificent would be the first and second words returned otherwise. The term brilliant is only returned in 10th position by an unrestricted search. While aiming at returning a vector distant enough from the input term might be desirable for some of the analogies, this threshold-based strategy is not fair when researching bias, as it potentially forces the exclusion of unbiased terms (in this case, after magnificent, one would find the following terms before encountering brilliant: marvelous, splendid, nice, fantastic, delightful, terrific, wonderful). In the example she is to sewing as he is to X., the threshold was strong enough to even exclude a potentially biased answer (woodworking).", " BIBREF2 also use the analogy test to demonstrate bias, starting from a pre-selection of terms to construct their queries from a variety of sources. In addition to using the original analogy code, thus missing out on what the actual returned term would be, they state that rather than reporting the top term, they hand-pick an example from the returned top-N words. While qualitatively observing and weighing the bias of a large set of returned answers makes sense, it can be misleading to cherry-pick and report very biased terms in sensitive analogies. At the very least, when reporting term-N, one should report the top-N terms to provide a more accurate picture. In Table TABREF12 , we report the top-10 candidates for asian is to engineer as black is to X in both the Reddit embeddings of BIBREF2 as well as GoogleNews, for completeness. Similarly, we now know that an unrestricted analogy search for man is to doctor as woman is to X returns doctor, but this does not provide a complete picture. Reporting the top-10 for this query as well as the top-10 for the inverted query (Table TABREF12 ) surely allows for a much better informed analysis rather than simply reporting doctor, or picking nurse." ], [ "If the analogy really is a symptom of a biased vector space, we should find similar biases for synonyms or closely related words to the input word INLINEFORM0 . However, with computer_programmer for example, this does not seem to be the case. If we use the term programmer instead of computer_programmer, homemaker is not very close (165), while for coder (13,374), developer (26,117) and hacker (56,646) it does not even appear in the top 10,000. Also, when using white instead of the less frequent and more specialised (and in a way less parallel to black) caucasian in the analogy of black is to criminal as caucasian is to X, lawful is found at position 40 instead of 13.", "In a way, examples are always cherry-picked, but when making substantial claims on observed biases, the fact that the results we obtain are due to a carefully chosen word (rather than a similar one, possibly even more frequent), should not be overlooked." 
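For concreteness, an unrestricted 3cosadd query (Equation EQREF2) can be written as follows; unlike the stock analogy code, no candidate is excluded, so the returned word may be one of the query terms. The tiny random vocabulary is a stand-in for real embeddings such as GoogleNews, and the function and variable names are mine, not from the paper's demo.

# Sketch of an unrestricted 3cosadd analogy query (Equation EQREF2 above).
# Nothing is excluded from the candidate set, so the answer may be a query word.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["man", "woman", "doctor", "nurse", "king", "queen", "gynecologist"]
emb = {w: rng.standard_normal(50) for w in vocab}   # stand-in for real embeddings

def unit(v):
    return v / np.linalg.norm(v)

def three_cos_add(a, b, c, topn=5):
    """Return the top-n words maximizing cos(d, b - a + c), with no exclusions."""
    target = unit(emb[b]) - unit(emb[a]) + unit(emb[c])
    scores = {w: float(np.dot(unit(emb[w]), unit(target))) for w in emb}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

# "a is to b as c is to X": man is to doctor as woman is to X
for word, score in three_cos_add("man", "doctor", "woman"):
    print(f"{word:15s} {score:.3f}")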
], [ "If we do not operate any manipulations on the returned vectors, neither by setting constraints nor by cherry-picking the output, we observe that in many cases, independently of the analogy type and the query terms, the model simply returns one of the input terms, and in particular INLINEFORM0 . Perhaps, this is a weakness of embeddings in modelling certain relations, or the analogy task as such is not apt at capturing them.", "Such observations relate to two points raised in previous work. First, the suggestive power of analogies should not be overestimated. It has been argued that what is observed through the analogy task might be mainly due to irrelevant neighborhood structure rather than to the vector offset that supposedly captures the analogy itself BIBREF11 , BIBREF12 . Indeed, BIBREF13 have also shown that the 3cosadd method is not able to capture all linguistic regularities present in the embeddings. Interestingly, the analogy task has not been recently used anymore to evaluate the soundness of contextualised embeddings BIBREF14 , BIBREF15 . Second, bias isn't fully captured anyway via the analogy task. In fact, BIBREF4 suggest that analogies are not quite reliable diagnostics for uncovering bias in word embeddings, since bias is anyway often encoded implicitly. As a side note, we would like to mention that in an earlier version of their paper, BIBREF18 accidentally searched for the inverse of the intended query, and still managed to find biased examples. This seems to be a further, strong, indication that strategies like this are not fully suitable to demonstrate the presence of bias in embeddings.", "If analogies might not be the most appropriate tool to capture certain relations, surely matters have been made worse by selecting results in order to prove (and emphasise) the presence of human bias. Using such sensational “party tricks\" BIBREF4 is harmful, as they get easily propagated both in science itself BIBREF19 , BIBREF20 , BIBREF21 , even outside NLP and AI BIBREF22 , as well as in popularised articles of the calibre of Nature BIBREF23 . This is even more dangerous, because of the widened pool of readers, and because such readers are usually in no position to verify the reliability of such examples.", "In any case, anyone who constructs and uses analogies to uncover human biases must do this fairly and transparently, and be aware of their limitations. In this sense, it is admirable that BIBREF5 try to better understand their results by checking them against actual job distributions between the two genders. Aiming primarily at scientific discovery rather than sensational findings is a strict pre-requisite to truly understand how and to what extent embeddings encode and reflect the biases of our society, and how to cope with this, both socially and computationally." ] ], "section_name": [ "Introduction", "Experimental details", "Not all analogies are the same", "Let women be doctors", "Constraining the output", "First or twentieth is not the same", "Computer programmer or just programmer?", "Please, use analogies fairly, and with care" ] }
{ "answers": [ { "annotation_id": [ "565b347c90ccb0779998b23cb28133de086e5e22" ], "answer": [ { "evidence": [ "For both word2vec BIBREF0 and gensim BIBREF7 we adapted the code so that the input terms of the analogy query are allowed to be returned. Throughout this article, we use two different embedding spaces. The first is the widely used representation built on GoogleNews BIBREF8 . The second is taken from BIBREF2 , and was trained on a Reddit dataset BIBREF9 ." ], "extractive_spans": [], "free_form_answer": "Word embeddings trained on GoogleNews and Word embeddings trained on Reddit dataset", "highlighted_evidence": [ "Throughout this article, we use two different embedding spaces. The first is the widely used representation built on GoogleNews BIBREF8 . The second is taken from BIBREF2 , and was trained on a Reddit dataset BIBREF9 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "infinity" ], "paper_read": [ "no" ], "question": [ "Which embeddings do they detect biases in?" ], "question_id": [ "ad5898fa0063c8a943452f79df2f55a5531035c7" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "" ], "topic_background": [ "familiar" ] }
{ "caption": [ "Table 1: Performance on the standard analogy test set (Mikolov et al. 2013a) using the original and the fair versions of the analogy code. The fair version allows for any term in the vocabulary to be returned, including the input terms, while the original one does not allow any of the input terms to be returned.", "Table 2: Results of analogy task on examples where one of the original words is the correct answer. Bold indicates the correct answer.", "Table 3: Fair results of embedding analogies. We use the same embedding set for our analysis as is used in the respective papers. “Index” denotes the position where the reported biased answer was actually found in our experiments. 1st answer and 2nd answer report what we actually find in first and second position. In brackets: the index of the reported answer as obtained using the swapped query, C is to B as A is to X.", "Table 4" ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png" ] }
[ "Which embeddings do they detect biases in?" ]
[ [ "1905.09866-Experimental details-0" ] ]
[ "Word embeddings trained on GoogleNews and Word embeddings trained on Reddit dataset" ]
590
1912.09152
Annotating and normalizing biomedical NEs with limited knowledge
Named entity recognition (NER) is the very first step in the linguistic processing of any new domain. It is currently a common process in BioNLP on English clinical text. However, it is still in its infancy in other major languages, as is the case for Spanish. Presented under the umbrella of the PharmaCoNER shared task, this paper describes a very simple method for the annotation and normalization of pharmacological, chemical and, ultimately, biomedical named entities in clinical cases. The system developed for the shared task is based on limited knowledge, collected, structured and munged in a way that clearly outperforms the scores obtained by similar dictionary-based systems for English in the past. Along with this recovery of knowledge-based methods for NER in subdomains, the paper also highlights the key contribution of resource-based systems in the validation and consolidation of both the annotation guidelines and the human annotation practices. In this sense, some of the authors' findings on the overall quality of human annotated datasets question the above-mentioned `official' results obtained by this system, which ranked second (0.91 F1-score) and first (0.916 F1-score), respectively, in the two PharmaCoNER subtasks.
{ "paragraphs": [ [ "Named Entity Recognition (ner) is considered a necessary first step in the linguistic processing of any new domain, as it facilitates the development of applications showing co-occurrences of domain entities, cause-effect relations among them, and, eventually, it opens the (still to be reached) possibility of understanding full text content. On the other hand, Biomedical literature and, more specifically, clinical texts, show a number of features as regards ner that pose a challenge to NLP researchers BIBREF0: (1) the clinical discourse is characterized by being conceptually very dense; (2) the number of different classes for nes is greater than traditional classes used with, for instance, newswire text; (3) they show a high formal variability for nes (actually, it is rare to find entities in their “canonical form”); and, (4) this text type contains a great number of ortho-typographic errors, due mainly to time constraints when drafted.", "Many ways to approach ner for biomedical literature have been proposed, but they roughly fall into three main categories: rule-based, dictionary-based (sometimes called knowledge-based) and machine-learning based solutions. Traditionally, the first two approaches have been the choice before the availability of Human Annotated Datasets (had), albeit rule-based approaches require (usually hand-crafted) rules to identify terms in the text, while dictionary-based approaches tend to miss medical terms not mentioned in the system dictionary BIBREF1. Nonetheless, with the creation and distribution of had as well as the development and success of supervised machine learning methods, a plethora of data-driven approaches have emerged —from Hidden Markov Models (HMMs) BIBREF2, Support Vector Machines (SVMs) BIBREF3 and Conditional Random Fields (CRFs) BIBREF4, to, more recently, those founded on neural networks BIBREF5. This fact has had an impact on knowledge-based methods, demoting them to a second plane. Besides, this situation has been favoured by claims on the uselessness of gazetteers for ner in, for example, Genomic Medicine (GM), as it was suggested by BIBREF0 [p. 26]CohenandDemner-Fushman:2014:", "One of the findings of the first BioCreative shared task was the demonstration of the long-suspected fact that gazetteers are typically of little use in GM.", "Although one might think that this view could strictly refer to the subdomain of GM and to the past —BioCreative I was a shared task held back in 2004—, we can still find similar claims today, not only referred to rule-based and dictionary-based methods, but also to stochastic ones BIBREF5.", "In this paper, in spite of previous statements, we present a system that uses rule-based and dictionary-based methods combined (in a way we prefer to call resource-based). 
Our final goals in the paper are two-fold: on the one hand, to describe our system, developed for the PharmaCoNER shared task, dealing with the annotation of some of the nes in health records (namely, pharmacological, chemical and biomedical entities) using a revisited version of rule- and dictionary-based approaches; and, on the other hand, to give pause for thought about the quality of datasets (and, thus, the fairness) with which systems of this type are evaluated, and to highlight the key role of resource-based systems in the validation and consolidation of both the annotation guidelines and the human annotation practices.", "In section SECREF2, we describe our initial resources and explain how they were built, and try to address the issues posed by features (1) and (2) above. Section SECREF3 depicts the core of our system and the methods we have devised to deal with text features (3) and (4). Results obtained in PharmaCoNER by our system are presented in section SECREF4. Section SECREF5 details some of our errors, but, most importantly, focusses on the errors and inconsistencies found in the evaluation dataset, given that they may shed doubts on the scores obtained by any system in the competition. Finally, we present some concluding remarks in section SECREF6." ], [ "As it is common in resource-based system development, special effort has been devoted to the creation of the set of resources used by the system. These are mainly two —a flat subset of the snomed ct medical ontology, and the library and a part of the contextual regexp grammars developed by BIBREF6 FSL:2018 for a previous competition on abbreviation resolution in clinical texts written in Spanish. The process of creation and/or adaptation of these resources is described in this section." ], [ "Although the competition proposes two different scenarios, in fact, both are guided by the snomed ct ontology —for subtask 1, entities must be identified with offsets and mapped to a predefined set of four classes (PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES and UNCLEAR); for subtask 2, a list of all snomed ct ids (sctid) for entities occurring in the text must be given, which has been called concept indexing by the shared task organizers. Moreover, PharmaCoNER organizers decided to promote snomed ct substance ids over product, procedure or other possible interpretations also available in this medical ontology for a given entity. This selection must be done even if the context clearly refers to a different concept, according to the annotation guidelines (henceforth, AnnotGuide) and the praxis. Finally, PROTEINAS is ranked as the first choice for substances in this category.", "These previous decisions alone on the part of the organizers greatly simplify the task at hand, making it possible to build (carefully compiled) subsets of the entities to be annotated. This is a great advantage over open domain ner, where (like in GM) the texts may contain an infinite (and very creative indeed) number of nes. For clinical cases, although the ne density is greater, there exist highly structured terminological resources for the domain. 
Moreover, the set of classes to use in the annotation exercise for subtask 1 has been dramatically cut down by the organizers.", "With the above-mentioned initial constraints in mind, we have painstakingly collected, from the whole set of snomed ct terms, instances of entities as classified by the human annotators in the datasets released by the organizers and, when browsing the snomed ct web version, we have tried to use the ontological hierarchical relations to pull a complete class down from snomed ct. This way, we have gathered 80 classes —from lipids to proteins to peptides or peptide hormones, from plasminogen activators to dyes to drugs or medicaments—, that have been arranged in a ranked way so as to mimic human annotators choices. The number of entities so collected (henceforth, `primary entities') is 51,309." ], [ "Some of the entities to be annotated, specially those in abbreviated form, are ambiguous without a context. This is the case, for instance, of PCR, whose expanded forms are (among other meanings; we use only English expanded forms) `reactive protein c', `polymerase chain reaction', `cardiorespiratory arrest'. In order to deal with these cases, we use a contextual regexp rule system with a lean and simple rule formalism previously developed BIBREF6. As an exemplification, we include one rule to deal with one of the cases of the preceeding ambiguity:", "b:[il::bioquímica|en sangre|hemoglobina|", "hemograma|leucocit|parásito|plaqueta|", "prote.na|recuento|urea] - [PCR] - >", "[m=proteína]", "", "A rule has a left hand side (LHS) and a right hand side (RHS). There is a focus in the LHS (PCR, within dashes) and a left and right context (that may be empty). When the left context includes a b: (like in this case), it indicates either left or right context. The words in the context can take other qualifiers —in this case, the matching will be case insensitive (i to the left of bioquímica) and local (l), which means the disjunction of words and/or stems can be found in a distance of 40 characters (this can be modifified by the user). Hence, the rule applies, selecting the proteína expansion (in RHS) of PCR if any of the words/stems specified as local context (40 chars maximum) is matched either to the left or right of the focus term (which is usually an abbreviation).", "With no tweaking at all for the datasets in PharmaCoNER competition, the system annotates correctly 18 out of 20 occurrences of PCR in the test dataset (a precision of 0.9).", "This component of the system is important because, only when the previous abbreviation is expanded as the first string (that of a protein name), it must be annotated, according to the AnnotGuide. The same ambiguity happens with Cr, which may mean `creatinine' or `chrome'. These expansions are both NORMALIZABLES, but, obviously, their sctid is different.", "The system currently uses 104 context rules, only for abbreviations and acronyms in the clinical cases. These rules, contrary to what is commonly referred in the biomedical processing literature BIBREF5, do not require a special domain knowledge (none of the authors do have it) and can be written, most of the times, in a very straightforward way in the formalism briefly described above." ], [ "In general, dictionary-based methods rely on strict string matching over a fixed set of lexical entries from the domain. This is clearly insufficient to deal with non-canonical linguistic forms of nes as used in clinical texts. 
For this reason, we have devised two different solutions to this shortcoming.", "In the first place, we have munged a great number of our primary entities, in a way similar to that described in BIBREF7 FSL:2019a for gazetteers used for protected information anonymization in clinical texts. We basically transform canonical forms in other possible textual forms observed when working with biomedical texts. With such transformations, a system module converts a salt compound like clorhidrato de ciclopentolato into ciclopentolato clorhidrato, or simply the PP de potasio into its corresponding adjective potásico. Other, more complex conversions include the treatment of antibodies —for instance anticuerpo contra especie de Leishmania becomes ac. Leishmania, among other variants—, or pairs of antibiotics normally prescribed together —which have a unique sctid and whose order we handle just as the `glueing' characters. Note, incidentally, that, while the input to this pre-processing step is always a string, the output can be a regular expression, that is linked to a sctid. Plural forms are also generated through this module, that uses 45 transformations (not all equally productive). Using these transformation rules, we produce 139,150 `secondary entities', many of them regexps. As a final (simple) example of this, consider the entity antígeno CD13: after applying one of the previous string-to-regexp transformations, it is converted to:", "(?:antígeno )?CD[- ]?13", "", "With the previous regexp, the system is able to identify (and string-normalize) six different textual realizations of the same unique snomed ct term. There are more complex rules that, thus, produce many more potential strings. The important thing with this strategy is that through the generative power of these predictably-created regexps from snomed ct entities the system is able to improve its recall and overcome the limitations of traditional dictionary-based approaches.", "Secondly, to tackle with careless drafting of clinical reports, a Levenshtein edit distance library is used on the whole background dataset. The process is run once, using our secondary entities as lexicon and a general vocabulary lexicon to rule out common words in the candidate search process. We have used distances in the range 1-3 (depending on string length) for sequences up to 3 words long. The output of this process, which links forms with spelling errors with canonical ones and, thus, to sctids, can be inspected prior to its inclusion in the system lexicon, if so desired." ], [ "As such, the annotation process is very simple. The program reads the input byte stream trying to identify known entities by means of a huge regexp built through the pre-processing of the available resources. If the candidate entity is ambiguous and (at least) one contextual rule exists for it, it is applied. For the rest of the nes, the system assigns them the class and sctid found in our ranked in-memory lexicon. As already mentioned in passing, the system does not tokenize text prior to ner, a processing order that we consider the right choice for highly entity-dense texts. The data structures built during pre-processing are efficiently stored on disk for subsequent runs, so the pre-processing is redone only when resources are edited." 
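As a concrete, heavily simplified illustration of the pipeline just described (predictable variant regexps derived from canonical terms, matching over the resulting secondary entities, and a contextual rule for an ambiguous abbreviation such as PCR), consider the sketch below. Every term, identifier, transformation and trigger list in it is an illustrative placeholder rather than a piece of the actual resources, only two of the 45 transformation rules are mimicked, and the real system compiles one huge alternation instead of looping over patterns.

import re

# "Primary entities": canonical strings mapped to (class, sctid); the ids here are fake placeholders.
PRIMARY = {
    "antígeno CD13": ("PROTEINAS", "sctid-000001"),
    "clorhidrato de ciclopentolato": ("NORMALIZABLES", "sctid-000002"),
}

def variants(term):
    # Munge a canonical term into a regexp covering predictable surface forms.
    m = re.match(r"(clorhidrato) de (.+)", term)            # salt compounds may appear inverted
    if m:
        salt, base = m.groups()
        return rf"(?:{salt} de {re.escape(base)}|{re.escape(base)} {salt})"
    m = re.match(r"antígeno (\w+?)(\d+)$", term)             # 'antígeno' and the hyphen/space are optional
    if m:
        stem, num = m.groups()
        return rf"(?:antígeno )?{stem}[- ]?{num}"
    return re.escape(term)

# "Secondary entities": compiled patterns, longest canonical forms tried first.
PATTERNS = [(re.compile(variants(t)), t) for t in sorted(PRIMARY, key=len, reverse=True)]

# One contextual rule: PCR reads as a protein if a trigger word appears within 40 characters.
PCR_PROTEIN_TRIGGERS = re.compile(r"bioquímica|hemoglobina|prote.na|recuento", re.I)

def tag(text):
    found = []
    for pattern, canonical in PATTERNS:
        for m in pattern.finditer(text):
            cls, sctid = PRIMARY[canonical]
            found.append((m.start(), m.end(), m.group(0), cls, sctid))
    for m in re.finditer(r"\bPCR\b", text):
        window = text[max(0, m.start() - 40):m.end() + 40]
        if PCR_PROTEIN_TRIGGERS.search(window):
            found.append((m.start(), m.end(), "PCR", "PROTEINAS", "sctid-000003"))
    return sorted(found)

print(tag("Se detecta CD-13 y PCR elevada en la bioquímica."))

The offline Levenshtein step described earlier would sit in front of a matcher like this one, linking misspelled surface forms found in the background set to these canonical entries before the scan is run.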
], [ "According to the organizers, and taking into account the ha of the tiny subset from the background dataset released to the participants, the system obtained the scores presented in table TABREF16, ranking as second best system for subtask1 and best system for subtask2 BIBREF8..", "Our results are consistent with our poor understanding of the classes for subtask 1. Having a null knowledge of Pharmacology, Biomedicine or even Chemistry, assigning classes (as requested for subtask 1) to entities is very hard, while providing a sctid (subtask 2) seems an easier goal. We will explain the point with an example entity —ácido hialurónico (`hyaluronic acid'). Using the ontological structure of snomed ct, one can find the following parent relations (just in English):", "hyaluronic acid is-a mucopolysaccharide is-a protein", "The authors have, in this case, promoted the PROTEINAS annotation for this entity, disregarding its interpretation as a replacement agent and overlooking a recommendation on polysaccharides in the AnnotGuide. Fortunately, all its interpretations share a unique sctid. The same may be true for", "haemosiderin is-a protein", "which is considered NORMALIZABLE in the test dataset. Similar cases are responsible for the lower performance on subtask 1 with respect to the more complex subtask 2.", "In spite of these human classification errors, our system scores outperform those obtained by PharmacoNER Tagger BIBREF5, a simpler system using a binary classification and a very different organization of the dataset with a smaller fragment for test (10% of the data as opposed to 25% for the official competition). In fact, our system improves their F1-score (89.06) by 1.3 points when compared with our results for the more complex PharmaCoNER subtask 1." ], [ "In this section, we perform error analysis for our system run on the test dataset. We will address both recall and precision errors, but mainly concentrate on the latter type, and on a thorough revision of mismatches between system and human annotations.", "In general, error analysis is favoured by knowledge-based methods, since it is through the understanding of the underlying reasons for an error that the system could be improved. Moreover, and differently to what happens with the current wave of artificial neural network methods, the whole annotation process —its guidelines for human annotators, the collection and appropriate structuring of resources, the adequate means to assign tags to certain entities but not to other, similar or even pertaining to the same class— must be clearly understood by the designer/developer/data architect of such systems. As a natural consequence of this attempt to mimick a task defined by humans to be performed, in the first place, also by humans, some inconsistencies, asystematic or missing assignments can be discovered, and this information is a valuable treasure not only for system developers but also for task organizers, guideline editors and future annotation campaigns, not to mention for the exactness of program evaluation results.", "Most of the error types made by the system (i.e., by the authors) in class assignment for subtask 1 have already been discussed. In the same vein, as regards subtask 2, a great number of errors come from the selection of the `product containing substance' reading from snomed ct rather to the `substance' itself. 
This is due to inexperience of the authors on the domain and the wrong consideration of context when tagging entities —the latter being clearly obviated in the AnnotGuide.", "In the following paragraphs, some of the most relevant inconsistencies found when performing error analysis of our system are highlighted. The list is necessarily incomplete due to space constraints, and it is geared towards the explanation of our possible errors." ], [ "Among some of the paradoxical examples in the AnnotGuide it stands out the double explicit consideration of gen (`gene'), when occurs alone in context, as both an entity to be tagged (positive rule P2 of the AnnotGuide) and a noun not to be tagged (negative rule N2). This inconsistency (and a bit of bad luck) has produced that none of the 6 occurrences as an independent noun —not introducing an entity— is tagged in the train+dev (henceforth, t+d) while the only 2 in the same context in the test dataset have been tagged. This amounts for 2 true negatives (tns) for the evaluation script." ], [ "The AnnotGuide proposal for the treatment of elliptical elements is somewhat confusing. For these cases, a longest match annotation is proposed, which is difficult to replicate automatically and not easy to remember for the human annotator. In many contexts, the annotator has made the right choice —for instance, in receptores de estrógeno y de progesterona— whereas in others do not —$|$anticuerpos anticardiolipina$|$ $|$IgG$|$ e $|$IgM$|$, with `$|$' marking the edges of the annotations. The last example occurs twice in the test dataset. Hence, the disagreement counts as 6 tns and 2 false positives (fps).", "On the other hand, there is a clear reference to food materials and nutrition in the AnnotGuide, where they are included in the class of substances. However, none of the following entities is tagged in the test dataset: azúcar (which is mandatory according to AnnotGuide and was tagged in t+d; 1 fp); almidón de maíz (also mandatory in AnnotGuide; 1 fp); and Loprofín, Aglutella, Aproten (hypoproteic nutrition products, 3 fps in total).", "There is an explicit indication in the AnnotGuide to annotate salts, with the example iron salts. However, in the context sales de litio (`lithium salts'), only the chemical element has been tagged (1 fp).", "There exist other differing-span mismatches between human and automatic annotation. These include anticuerpos anticitoplasma de neutrófilo, where the ha considers the first two words only (in one of the occurrences, 1 fp); in the text fragment b2 microglobulina, CEA y CA 19,9 normales, CA 19,9 is the correct span for the last entity (and not CA, 1 fp); A.S.T is the span selected (for A.S.T., 1 fp); finally, in the context lgM anticore only lgM has been tagged (1 fp).", "Other prominent mismatch between had and AnnotGuide is that of DNA, which is explicitly included in the AnnotGuide (sects. P2 and O1). It accounts for 2 fps.", "But perhaps one of the most common discrepancies between human and automatic annotation has to do with medicaments normally prescribed together, which have a unique sctid. Examples include amiloride/hidroclorotiazida (1 fp); and betametasona + calcipotriol (1 fp) in the test set. This situation was also observed in the t+d corpus fragment (tenofovir + emtricitabina, carbonato cálcico /colecalciferol, lopinavir/ritonavir)." 
], [ "Some inconsistencies between dataset annotations have turned the authors crazy: NPT (acronym for `total parenteral nutrition, TPN') is tagged in the train+dev dataset 15 out of 21 times it occurs. The common sense of frequency in the ha of texts has led us to tag it in the background set. Unluckily, neither NPT nor its expansion have been tagged in the test dataset. This has also been the behaviour in ha for `parenteral nutrition' and `enteral nutrition' (and their corresponding acronyms) in test dataset, since these entities have not been tagged. We asked the organizers about this and other entities for which we had doubts, either because the AnnotGuide didn't cover their cases or because the ha didn't match the recommendations in the AnnotGuide. Woefully, communication with the organizers has not been very fluent on this respect. All in all, this bad decision on the part of the authors amounts for 6 fps (more than 7.5% of our fps according to evaluation script).", "For other cases, decisions that may be clearly induced from the tagging of train+dev datasets, have not been applied in the test corpus fragment. These include cadenas ligeras (5 times in t+d, 1 fp in test); enzimas hepáticas (tagged systematically in t+d, 1 fp); p53 (also tagged in t+d, 1 fp).", "Another entity that stands out is hidratos de carbono (`carbohydrates'). It is tagged twice in the t+d dataset, occurring 4 times in the set (once as HC). However, although the form carbohidratos has been annotated twice in the test set, hidratos de carbono has been not (1 fp).", "Moreover, suero (`Sodium chloride solution' or `serum') deserves its own comment. Both entity references are tagged in the train+dev datasets (although with the latter meaning it is tagged only 4 out of 12 occurrences). We decided to tag it due to its relevance. In the test dataset, it occurs 5 times with the blood material meaning, but it has only been tagged twice as such (one of them being an error, since it refers to the former meaning). Our system tagged all occurrences, but tagged also one of the instances with the former meaning as serum (3 fps).", "Finally, there are some inconsistencies within the same dataset. For example, nutricional agent Kabiven is tagged as both NORMALIZABLES (with sctid) and NO_NORMALIZABLES in the very same text. The same happens with another nutritional complement, Cernebit, this time in two different files. The perfusion solution Isoplasmal G (with a typo in the datasets —Isoplasmar G) is tagged as NORMALIZABLES and UNCLEAR. These examples reveal a vague understanding (or definition) of criteria as regards fluids and nutrition, as we pointed out at the beginning of this section." ], [ "Some of the entities occurring in the test dataset have not always been tagged. This is the case for celulosa (annotated only once but used twice, 1 fp); vimentina (same situation as previous, 1 fp); LDH (tagged 20 times in t+d but not in one of the files, 1 fp); cimetidina (1 fp); reactantes de fase aguda (2 fps; 2 other occurrences were tagged); anticuerpos antinucleares (human annotators missed 1, considered fp)." ], [ "On our refinement work with the system, some incorrect sctids have emerged. These errors impact on subtask 2 (some also on subtask 1). A large sample of them is enumerated below.", "ARP (`actividad de renina plasmática', `plasma renin activity', PRA) cannot be linked to sctid for renina, which happens twice. 
In the context `perfil de antigenos [sic] extraíbles del núcleo (ENA)', ENA has been tagged with sctid of the antibody (1 fp). In one of the files, tioflavina is linked to sctid of tioflavina T, but it could be tioflavina S. Thus, it should be NO_NORMALIZABLE. Harvoni is ChEBI:85082 and not <null> (1 fp). AcIgM contra CMV has a wrong sctid (1 fp). HBsAg has no sctid in the test set; it should be 22290004 (`Hepatitis B surface antigen') (1 fp).", "There are other incorrect annotations, due to inadvertent human errors, like biotina tagged as PROTEINAS or VEB (`Epstein-Barr virus') being annotated when it is not a substance. Among these mismatches between ha and system annotation, the most remarkable is the case of synonyms in active principles. For instance, the brand name drug Dekapine has been linked to `ácido valproico' in the former case and to `valproato sódico' in the latter. These terms are synonymous, but sadly they don't share sctid. Hence, this case also counts as a fp.", "A gold standard dataset for any task is very hard to develop, so a continuous editing of it is a must. In this discussion, we have focused on false positives (fps) according to the script used for system evaluation, with the main purpose of understanding the domain knowledge encoded in the linguistic conventions (lexical/terminological items and constructions) used by health professionals, but also the decisions underlying both the AnnotGuide and the ha practice.", "In this journey to system improvement and authors enlightenment, some inconsistencies, errors, omissions have come up, as it has been reflected in this section, so both the guidelines for and the practice of annotation can also be improved in future use scenarios of the clinical case corpus built and maintained by the shared task organizers.", "Our conclusion on this state of affairs is that some of the inconsistencies spotted in this section show that there were not a rational approach to the annotation of certain entities contained in the datasets (apart from other errors and/or oversights), and, hence, the upper bound of any tagging system is far below the ideal 1.0 F1-score. To this respect, in very many cases, the authors have made the wrong choice, but in others they were guided by analogy or common sense. Maybe a selection founded on probability measures estimated on training material could have obtained better results with this specific test dataset. However, in the end, this cannot be considered as an indication of a better system performance, since, as it has been shown, the test dataset used still needs more refinement work to be used as the right dataset for automatic annotation evaluation." ], [ "With this resource-based system developed for the PharmaCoNER shared task on ner of pharmacological, chemical and biomedical entities, we have demonstrated that, having a very limited knowledge of the domain, and, thus, making wrong choices many times in the creation of resources for the tasks at hand, but being more flexible with the matching mechanisms, a simple-design system can outperform a ner tagger for biomedical entities based on state-of-the-art artificial neural network technology. Thus, knowledge-based methods stand on their own merits in task resolution.", "But, perhaps most importantly, the other key point brought to light in this contribution is that a resource-based approach also favours a more critical stance on the dataset(s) used to evaluate system performance. 
With these methods, system development can go hand in hand with dataset refinement in a virtuous circle, which suggests that the next time we are planning to add a new gazetteer or word embedding to our system in order to try to improve system performance, we should first look at our data and, like King Midas, turn our Human Annotated Dataset into a true Gold Standard Dataset." ], [ "We thank three anonymous reviewers of our manuscript for their careful reading and their many insightful comments and suggestions. We have done our best to provide a revised version of the manuscript that reflects their suggestions. Any remaining errors are our own responsibility." ] ], "section_name": [ "Introduction", "Resource building", "Resource building ::: SNOMED CT", "Resource building ::: Contextual regexp grammars", "Development", "Development ::: Annotation process", "Results", "Discussion", "Discussion ::: Inconsistency in the ag", "Discussion ::: Inconsistency in ha as regards ag", "Discussion ::: Inconsistency in ha on the test set as regards t+d sets", "Discussion ::: Asystematic/incomplete annotation", "Discussion ::: Incorrect sctids", "Conclusions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "9b85f5604c00aea1554d310d1f44b13b82b684ae" ], "answer": [ { "evidence": [ "In this paper, in spite of previous statements, we present a system that uses rule-based and dictionary-based methods combined (in a way we prefer to call resource-based). Our final goals in the paper are two-fold: on the one hand, to describe our system, developed for the PharmaCoNER shared task, dealing with the annotation of some of the nes in health records (namely, pharmacological, chemical and biomedical entities) using a revisited version of rule- and dictionary-based approaches; and, on the other hand, to give pause for thought about the quality of datasets (and, thus, the fairness) with which systems of this type are evaluated, and to highlight the key role of resource-based systems in the validation and consolidation of both the annotation guidelines and the human annotation practices." ], "extractive_spans": [ "rule-based and dictionary-based methods " ], "free_form_answer": "", "highlighted_evidence": [ "In this paper, in spite of previous statements, we present a system that uses rule-based and dictionary-based methods combined (in a way we prefer to call resource-based). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "565f24840545e71dc8ede36f531e542009914c9a" ], "answer": [ { "evidence": [ "Although the competition proposes two different scenarios, in fact, both are guided by the snomed ct ontology —for subtask 1, entities must be identified with offsets and mapped to a predefined set of four classes (PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES and UNCLEAR); for subtask 2, a list of all snomed ct ids (sctid) for entities occurring in the text must be given, which has been called concept indexing by the shared task organizers. Moreover, PharmaCoNER organizers decided to promote snomed ct substance ids over product, procedure or other possible interpretations also available in this medical ontology for a given entity. This selection must be done even if the context clearly refers to a different concept, according to the annotation guidelines (henceforth, AnnotGuide) and the praxis. Finally, PROTEINAS is ranked as the first choice for substances in this category." ], "extractive_spans": [], "free_form_answer": "Entity identification with offset mapping and concept indexing", "highlighted_evidence": [ "Although the competition proposes two different scenarios, in fact, both are guided by the snomed ct ontology —for subtask 1, entities must be identified with offsets and mapped to a predefined set of four classes (PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES and UNCLEAR); for subtask 2, a list of all snomed ct ids (sctid) for entities occurring in the text must be given, which has been called concept indexing by the shared task organizers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What does their system consist of?", "What are the two PharmaCoNER subtasks?" ], "question_id": [ "601f96770726a0063faf9bacd5db01c4af5add1f", "1c68d18b4b65c4d75dc199d2043079490f6310f8" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "spanish", "spanish" ], "topic_background": [ "", "" ] }
{ "caption": [ "Table 1: Results for PHARMACONER test dataset (both subtasks)" ], "file": [ "5-Table1-1.png" ] }
[ "What are the two PharmaCoNER subtasks?" ]
[ [ "1912.09152-Resource building ::: SNOMED CT-0" ] ]
[ "Entity identification with offset mapping and concept indexing" ]
591
2004.02451
An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models
We explore the utility of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as "barks" in "*The dogs barks". Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, using English data, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model's robustness on them, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still struggle to resolve agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.
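The minimal-pair protocol sketched in this abstract, in which a model passes an item when it assigns a higher log-likelihood to the grammatical sentence than to its minimally different ungrammatical counterpart, can be written down in a few lines. The tiny untrained LSTM and vocabulary below are placeholders standing in for the trained language models; only the scoring logic is the point of the sketch.

import torch
import torch.nn as nn

vocab = {w: i for i, w in enumerate(["<bos>", "the", "dogs", "bark", "barks", "."])}

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        hidden, _ = self.rnn(self.emb(ids))
        return self.out(hidden)                     # next-word logits at every position

def sentence_logprob(model, words):
    ids = torch.tensor([[vocab["<bos>"]] + [vocab[w] for w in words]])
    logp = torch.log_softmax(model(ids[:, :-1]), dim=-1)
    targets = ids[:, 1:]
    return logp.gather(-1, targets.unsqueeze(-1)).sum().item()

torch.manual_seed(0)
lm = TinyLM(len(vocab))
good = sentence_logprob(lm, ["the", "dogs", "bark", "."])
bad = sentence_logprob(lm, ["the", "dogs", "barks", "."])
print("pass" if good > bad else "fail", good, bad)

With a trained model in place of the toy one, summing per-token log-probabilities in exactly this way is enough to score the kind of syntactic test suite the paper evaluates on.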
{ "paragraphs": [ [ "intro", "Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subject-verb agreement. So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases BIBREF0 even with the massive size of the data and model BIBREF1, or argue the necessity of an architectural change to track the syntactic structure explicitly BIBREF2, BIBREF3. Here the task is to evaluate whether a model assigns a higher likelihood on a grammatically correct sentence (UNKREF3) over an incorrect sentence (UNKREF5) that is minimally different from the original one BIBREF4.", "", ".5ex", "The author that the guards like laughs.", ".5ex", "The author that the guards like laugh.", "", "In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNN-LMs, we perform a series of experiments under a different condition from the prior work. Specifically, we extensively analyze the performance of the models that are exposed to explicit negative examples. In this work, negative examples are the sentences or tokens that are grammatically incorrect, such as (UNKREF5) above.", "Since these negative examples provide a direct learning signal on the task at test time it may not be very surprising if the task performance goes up. We acknowledge this, and argue that our motivation for this setup is to deepen understanding, in particular the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision. Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions.", "The first research question we pursue is about this latter point: what is a better method to utilize negative examples that help LMs to acquire robustness on the target syntactic constructions? Regarding this point, we find that adding additional token-level loss trying to guarantee a margin between log-probabilities for the correct and incorrect words (e.g., $\\log p(\\textrm {laughs} | h)$ and $\\log p(\\textrm {laugh} | h)$ for (UNKREF3)) is superior to the alternatives. On the test set of BIBREF0, we show that LSTM language models (LSTM-LMs) trained by this loss reach near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase of perplexity about 1.0 point.", "Past work conceptually similar to us is BIBREF5, which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals to the evaluation task. They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps models aware of the target syntax (subject-verb agreement). 
Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows to evaluate the true capacity of the current architectures. In our experiments (Section exp), we show that our margin loss achieves higher syntactic performance.", "Another relevant work on the capacity of LSTMs is BIBREF6, which shows that by distilling from syntactic LMs BIBREF7, LSTM-LMs can be robust on syntax. We show that our LMs with the margin loss outperforms theirs in most of the aspects, further strengthening the capacity of LSTMs, and also discuss the limitation.", "The latter part of this paper is a detailed analysis of the trained models and introduced losses. Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals? This question can be seen as a fine-grained one raised by BIBREF5 with a stronger tool and improved evaluation metric. Among tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging. To inspect whether this is due to the architectural limitation, we train another LM on a dataset, on which we unnaturally augment sentences involving object RCs. Since it is known that object RCs are relatively rare compared to subject RCs BIBREF8, frequency may be the main reason for the lower performance. Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC. This result suggests an inherent difficulty to track a syntactic state across an object RC for sequential neural architectures.", "We finally provide an ablation study to understand the encoded linguistic knowledge in the models learned with the help of our method. We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC. We observe no huge score drops by both. This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired by negative examples to similar constructions for which negative examples are not explicitly given. The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective." ], [ "The most common evaluation metric of an LM is perplexity. Although neural LMs achieve impressive perplexity BIBREF9, it is an average score across all tokens and does not inform the models' behaviors on linguistically challenging structures, which are rare in the corpus. This is the main motivation to separately evaluate the models' syntactic robustness by a different task." ], [ "task As introduced in Section intro, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences at a critical position affecting the grammaticality. 
For example, (UNKREF3) and (UNKREF5) only differ at a final verb form, and to assign a higher probability to (UNKREF3), models need to be aware of the agreement dependency between author and laughs over an RC." ], [ "While initial work BIBREF4, BIBREF10 has collected test examples from naturally occurring sentences, this approach suffers from the coverage issue, as syntactically challenging examples are relatively rare. We use the test set compiled by BIBREF0, which consists of synthetic examples (in English) created by a fixed vocabulary and a grammar. This approach allows us to collect varieties of sentences with complex structures.", "The test set is divided by a necessary syntactic ability. Many are about different patterns of subject-verb agreement, including local (UNKREF8) and non-local ones across a prepositional phrase or a subject/object RC, and coordinated verb phrases (UNKREF9). (UNKREF1) is an example of agreement across an object RC.", "", "The senators smile/*smiles.", "The senators like to watch television shows and are/*is twenty three years old.", "Previous work has shown that non-local agreement is particularly challenging for sequential neural models BIBREF0.", "The other patterns are reflexive anaphora dependencies between a noun and a reflexive pronoun (UNKREF10), and on negative polarity items (NPIs), such as ever, which requires a preceding negation word (e.g., no and none) at an appropriate scope (UNKREF11):", "", "The authors hurt themselves/*himself.", "No/*Most authors have ever been popular.", "", "Note that NPI examples differ from the others in that the context determining the grammaticality of the target word (No/*Most) does not precede it. Rather, the grammaticality is determined by the following context. As we discuss in Section method, this property makes it difficult to apply training with negative examples for NPIs for most of the methods studied in this work.", "All examples above (UNKREF1–UNKREF11) are actual test sentences, and we can see that since they are synthetic some may sound somewhat unnatural. The main argument behind using this dataset is that even not very natural, they are still strictly grammatical, and an LM equipped with robust syntactic abilities should be able to handle them as human would do." ], [ "lm" ], [ "Following the practice, we train LMs on the dataset not directly relevant to the test set. Throughout the paper, we use an English Wikipedia corpus assembled by BIBREF10, which has been used as training data for the present task BIBREF0, BIBREF6, consisting of 80M/10M/10M tokens for training/dev/test sets. It is tokenized and rare words are replaced by a single unknown token, amounting to the vocabulary size of 50,000." ], [ "Since our focus in this paper is an additional loss exploiting negative examples (Section method), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied BIBREF11. Deviating from some prior work BIBREF0, BIBREF1, we train LMs at sentence level as in sequence-to-sequence models BIBREF12. This setting has been employed in some previous work BIBREF3, BIBREF6.", "Parameters are optimized by SGD. 
For regularization, we apply dropout on word embeddings and outputs of every layer of LSTMs, with weight decay of 1.2e-6, and anneal the learning rate by 0.5 if the validation perplexity does not improve successively, checking every 5,000 mini-batches. Mini-batch size, dropout weight, and initial learning rate are tuned by perplexity on the dev set of Wikipedia dataset.", "The size of our three-layer LM is the same as the state-of-the-art LSTM-LM at document-level BIBREF9. BIBREF0's LSTM-LM is two-layer with 650 hidden units and word embeddings. Comparing two, since the word embeddings of our models are smaller (400 vs. 650) the total model sizes are comparable (40M for ours vs. 39M for theirs). Nonetheless, we will see in the first experiment that our carefully tuned three-layer model achieves much higher syntactic performance than their model (Section exp), being a stronger baseline to our extensions, which we introduce next." ], [ "method", "Now we describe four additional losses for exploiting negative examples. The first two are existing ones, proposed for a similar purpose or under a different motivation. As far as we know, the latter two have not appeared in past work.", "We note that we create negative examples by modifying the original Wikipedia training sentences. As a running example, let us consider the case where sentence (UNKREF19) exists in a mini-batch, from which we create a negative example (UNKREF21).", "", ".5ex", "An industrial park with several companies is located in the close vicinity.", ".5ex", "An industrial park with several companies are located in the close vicinity." ], [ "By a target word, we mean a word for which we create a negative example (e.g., is). We distinguish two types of negative examples: a negative token and a negative sentence; the former means a single incorrect word (e.g., are)." ], [ "This is proposed by BIBREF5 to complement a weak inductive bias in LSTM-LMs for learning syntax. It is multi-task learning across the cross-entropy loss ($L_{lm}$) and an additional loss ($L_{add}$):", "where $\\beta $ is a relative weight for $L_{add}$. Given outputs of LSTMs, a linear and binary softmax layers predict whether the next token is singular or plural. $L_{add}$ is a loss for this classification, only defined for the contexts preceding a target token $x_{i}$:", "where $x_{1:i} = x_1 \\cdots x_{i}$ is a prefix sequence and $\\mathbf {h^*}$ is a set of all prefixes ending with a target word (e.g., An industrial park with several companies is) in the training data. $\\textrm {num}(x) \\in \\lbrace \\textrm {singular, plural} \\rbrace $ is a function returning the number of $x$. In practice, for each mini-batch for $L_{lm}$, we calculate $L_{add}$ for the same set of sentences and add these two to obtain a total loss for updating parameters.", "As we mentioned in Section intro, this loss does not exploit negative examples explicitly; essentially a model is only informed of a key position (target word) that determines the grammaticality. This is rather an indirect learning signal, and we expect that it does not outperform the other approaches." ], [ "This is recently proposed BIBREF15 for resolving the repetition issue, a known problem for neural text generators BIBREF16. 
Aiming at learning a model that can suppress repetition, they introduce an unlikelihood loss, which is an additional loss at a token level and explicitly penalizes choosing words previously appeared in the current context.", "We customize their loss for negative tokens $x_i^*$ (e.g., are in (UNKREF21)). Since this loss is added at token-level, instead of Eq. () the total loss is $L_{lm}$, which we modify as:", "where $\\textrm {neg}_t(\\cdot )$ returns negative tokens for a target $x_i$. $\\alpha $ controls the weight. $\\mathbf {x}$ is a sentence in the training data $D$. The unlikelihood loss strengthens the signal to penalize undesirable words in a context by explicitly reducing the likelihood of negative tokens $x_i^*$. This is more direct learning signal than the binary classification loss." ], [ "We propose a different loss, in which the likelihoods for correct and incorrect sentences are more tightly coupled. As in the binary classification loss, the total loss is given by Eq. (). We consider the following loss for $L_{add}$:", "where $\\delta $ is a margin value between the log-likelihood of original sentence $\\mathbf {x}$ and negative sentences $\\lbrace \\mathbf {x}_j^* \\rbrace $. $\\textrm {neg}_s(\\cdot )$ returns a set of negative sentences by modifying the original one. Note that we change only one token for each $\\mathbf {x}_j^*$, and thus may obtain multiple negative sentences from one $\\mathbf {x}$ when it contains multiple target tokens (e.g., she leaves there but comes back ...).", "Comparing to the unlikelihood loss, not only decreasing the likelihood of a negative example, this loss tries to guarantee a minimal difference between the two likelihoods. The learning signal of this loss seems stronger in this sense; however, the token-level supervision is missing, which may provide a more direct signal to learn a clear contrast between correct and incorrect words. This is an empirical problem we pursue in the experiments." ], [ "Our final loss is a combination of the previous two, by replacing $g(x_i)$ in the unlikelihood loss by a margin loss:" ], [ "Each method employs a few additional hyperparameters. For the binary classification ($\\beta $) and unlikelihood ($\\alpha $) losses, we select their values from $\\lbrace 1,10,100,1000\\rbrace $ that achieve the best average syntactic performance (we find $\\alpha =1000, \\beta =1$). For the two margin losses, we fix $\\beta =1.0$ and $\\alpha =1.0$ and only see the effects of margin values." ], [ "scope Since our goal is to understand to what extent LMs can be sensitive to the target syntactic constructions by giving explicit supervision via negative examples, we only prepare negative examples on the constructions that are directly tested at evaluation. Specifically, we mark the following words in the training data, and create negative examples:", "", "To create negative examples on subject-verb agreement, we mark all present verbs and change their numbers.", "", "We also create negative examples on reflexive anaphora, by flipping between {themselves}$\\leftrightarrow ${himself, herself}.", "These two are both related to the syntactic number of a target word. For binary classification we regard both as a target word, apart from the original work that only deals with subject-verb agreement BIBREF5. We use a single common linear layer for both constructions.", "In this work, we do not create negative examples for NPIs. This is mainly for technical reasons. 
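Before turning to why NPIs are excluded, a short sketch may help make the token-level margin idea concrete: alongside the usual cross-entropy term, every marked target word contributes a hinge penalty whenever its log-probability does not exceed that of its negative token by at least the margin. Tensor shapes, the weighting and the reduction below are assumptions made for the sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def lm_loss_with_token_margin(logits, targets, neg_targets, target_mask, alpha=1.0, delta=10.0):
    # logits:      [batch, time, vocab] next-word scores from the LM
    # targets:     [batch, time] gold next words
    # neg_targets: [batch, time] the incorrect alternative at target positions (arbitrary elsewhere)
    # target_mask: [batch, time] 1.0 where a negative token exists, 0.0 otherwise
    logp = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(logp.transpose(1, 2), targets)                  # standard LM objective
    pos = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)         # log p(correct | h)
    neg = logp.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)     # log p(negative | h)
    hinge = torch.clamp(delta - (pos - neg), min=0.0) * target_mask  # penalize margin violations only
    return nll + alpha * hinge.sum() / target_mask.sum().clamp(min=1.0)

The margin delta is the quantity explored in the experiments below; per the paper, performance improves as it grows but degrades when it is set too large.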
Among four losses, only the sentence-level margin loss can correctly handle negative examples for NPIs, essentially because other losses are token-level. For NPIs, left contexts do not have information to decide the grammaticality of the target token (a quantifier; no, most, etc.) (Section task). Instead, in this work, we use NPI test cases as a proxy to see possible negative (or positive) impacts as compensation for specially targeting some constructions. We will see that in particular for our margin losses, such negative effects are very small." ], [ "exp", "We first see the overall performance of baseline LMs as well as the effects of additional losses. Throughout the experiments, for each setting, we train five models from different random seeds and report the average score and standard deviation." ], [ "The main accuracy comparison across target constructions for different settings is presented in Table main. We first notice that our baseline LSTM-LMs (Section lm) perform much better than BIBREF0's LM. A similar observation is recently made by BIBREF6. This suggests that the original work underestimates the true syntactic ability induced by LSTM-LMs. The table also shows the results by their distilled LSTMs from RNNGs (Section intro)." ], [ "For the two types of margin loss, which margin value should we use? Figure margin reports average accuracies within the same types of constructions. For both token and sentence-levels, the task performance increases with $\\delta $, but a too large value (15) causes a negative effect, in particular on reflexive anaphora. There is an increase of perplexity by both methods. However, this effect is much smaller for the token-level loss. In the following experiments, we fix the margin value to 10 for both, which achieves the best syntactic performance." ], [ "We see a clear tendency that our token-level margin achieves overall better performance. Unlikelihood loss does not work unless we choose a huge weight parameter ($\\alpha =1000$), but it does not outperform ours, with a similar value of perplexity. The improvements by binary-classification loss are smaller, indicating that the signals are weaker than other methods with explicit negative examples. Sentence-level margin loss is conceptually advantageous in that it can deal with any types of negative examples defined in a sentence including NPIs. We see that it is often competitive with token-level margin loss, but we see relatively a large increase of perplexity (4.9 points). This increase is observed by even smaller values (Figure margin). Understanding the cause of this degradation as well as alleviating it is an important future direction." ], [ "orc In Table main, the accuracies on dependencies across an object RC are relatively low. The central question in this experiment is whether this low performance is due to the limitation of current architectures, or other factors such as frequency. We base our discussion on the contrast between object (UNKREF45) and subject (UNKREF46) RCs:", "", "The authors (that) the chef likes laugh.", "The authors that like the chef laugh.", "Importantly, the accuracies for a subject RC are more stable, reaching 99.8% with the token-level margin loss, although the content words used in the examples are common.", "It is known that object RCs are less frequent than subject RCs BIBREF8, BIBREF18, and it could be the case that the use of negative examples still does not fully alleviate this factor. 
Here, to understand the true limitation of the current LSTM architecture, we try to eliminate such other factors as much as possible under a controlled experiment." ], [ "We first inspect the frequencies of object and subject RCs in the training data, by parsing them with the state-of-the-art Berkeley neural parser BIBREF19. In total, while subject RCs occur 373,186 times, object RCs only occur 106,558 times. We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section lm). To this end, we randomly pick up 30 million sentences from Wikipedia (not overlapped to any sentences in the original corpus), parse by the same parser, and filter sentences containing an object RC, amounting to 680,000 sentences. Among the test cases about object RCs, we compare accuracies on subject-verb agreement, to make a comparison with subject RCs. We also evaluate on “animate only” subset, which has a correspondence to the test cases for subject RC with only differences in word order and inflection (like (UNKREF45) and (UNKREF46); see footnote FOOTREF47). Of particular interest to us is the accuracy on these animate cases. Since the vocabularies are exactly the same, we hypothesize that the accuracy will reach the same level as that on subject RCs with our augmentation." ], [ "However, for both all and animate cases, accuracies are below those for subject RCs (Figure orc). Although we see improvements from the original score (93.7), the highest average accuracy by the token-level margin loss on “animate” subset is 97.1 (“with that”), not beyond 99%. This result indicates some architectural limitation of LSTM-LMs in handling object RCs robustly at a near perfect level. Answering why the accuracy does not reach (almost) 100%, perhaps with other empirical properties or inductive biases BIBREF20, BIBREF21 is future work." ], [ "One distinguishing property of our margin loss, in particular token-level loss, is that it is highly lexical, making contrast explicitly between correct and incorrect words. This direct signal may make models acquire very specialized knowledge about each target word, not very generalizable one across similar words and occurring contexts. In this section, to get insights into the transferability of syntactic knowledge induced by our margin losses, we provide an ablation study by removing certain negative examples during training." ], [ "We perform two kinds of ablation. For token-level ablation (-Token), we avoid creating negative examples for all verbs that appear as a target verb in the test set. Another is construction-level (-Pattern), by removing all negative examples occurring in a particular syntactic pattern. We ablate a single construction at a time for -Pattern, from four non-local subject-verb dependencies (across a prepositional phrase (PP), subject RC, object RC, and long verb phrase (VP)). We hypothesize that models are less affected by token-level ablation, as knowledge transfer across words appearing in similar contexts is promoted by language modeling objective. We expect that construction-level supervision would be necessary to induce robust syntactic knowledge, as perhaps different phrases, e.g., a PP and a VP, are processed differently." ], [ "Figure ablation is the main results. Across models, we restrict the evaluation on four non-local dependency constructions, which we selected as ablation candidates as well. 
For a model with -Pattern, we evaluate only on examples of construction ablated in the training (see caption). To our surprise, both -Token and -Pattern have similar effects, except “Across an ORC”, on which the degradation by -Pattern is larger. This may be related to the inherent difficulty of object RCs for LSTM-LMs that we verified in Section orc. For such particularly challenging constructions, models may need explicit supervision signals. We observe lesser score degradation by ablating prepositional phrases and subject RCs. This suggests that, for example, the syntactic knowledge strengthened for prepositional phrases with negative examples could be exploited to learn the syntactic patterns about subject RCs, even when direct learning signals on subject RCs are missing.", "We see approximately 10.0 points score degradation on long VP coordination by both ablations. Does this mean that long VPs are particularly hard in terms of transferability? We find that the main reason for this drop, relative to other cases, are rather technical, essentially due to the target verbs used in the test cases. See Table vpcoordfirst,secondvp, which show that failed cases for the ablated models are often characterized by the existence of either like or likes. Excluding these cases (“other verbs” in Table secondvp), the accuracies reach 99.2 and 98.0 by -Token and -Pattern, respectively. These verbs do not appear in the test cases of other tested constructions. This result suggests that the transferability of syntactic knowledge to a particular word may depend on some characteristics of that word. We conjecture that the reason of weak transferability to likes and like is that they are polysemous; e.g., in the corpus, like is much more often used as a preposition and being used as a present tense verb is rare. This types of issues due to frequency may be one reason of lessening the transferability. In other words, like can be seen as a challenging verb to learn its usage only from the corpus, and our margin loss helps for such cases." ], [ "We have shown that by exploiting negative examples explicitly, the syntactic abilities of LSTM-LMs greatly improve, demonstrating a new capacity of handling syntax robustly. Given a success of our approach using negative examples, and our final analysis for transferability, which indicates that the negative examples do not have to be complete, one interesting future direction is to extend our approach to automatically inducing negative examples themselves in some way, possibly with orthographic and/or distributional indicators or others." ], [ "We would like to thank Naho Orita and the members of Computational Psycholinguistics Tokyo for their valuable suggestions and comments. This paper is based on results obtained from projects commissioned by the New Energy and Industrial Technology Development Organization (NEDO)." 
] ], "section_name": [ "Introduction", "Target Task and Setup", "Target Task and Setup ::: Syntactic evaluation task", "Target Task and Setup ::: Syntactic evaluation task ::: @!START@BIBREF0@!END@ test set", "Target Task and Setup ::: Language models", "Target Task and Setup ::: Language models ::: Training data", "Target Task and Setup ::: Language models ::: Baseline LSTM-LM", "Learning with Negative Examples", "Learning with Negative Examples ::: Notations", "Learning with Negative Examples ::: Negative Example Losses ::: Binary-classification loss", "Learning with Negative Examples ::: Negative Example Losses ::: Unlikelihood loss", "Learning with Negative Examples ::: Negative Example Losses ::: Sentence-level margin loss", "Learning with Negative Examples ::: Negative Example Losses ::: Token-level margin loss", "Learning with Negative Examples ::: Parameters", "Learning with Negative Examples ::: Scope of Negative Examples", "Experiments on Additional Losses", "Experiments on Additional Losses ::: Naive LSTM-LMs perform well", "Experiments on Additional Losses ::: Higher margin value is effective", "Experiments on Additional Losses ::: Which additional loss works better?", "Limitations of LSTM-LMs", "Limitations of LSTM-LMs ::: Setup", "Limitations of LSTM-LMs ::: Results", "Do models generalize explicit supervision, or just memorize it?", "Do models generalize explicit supervision, or just memorize it? ::: Setup", "Do models generalize explicit supervision, or just memorize it? ::: Results", "Conclusion", "Acknowledges" ] }
{ "answers": [ { "annotation_id": [ "d2bdb156962cce873e49e9b76f7be8e78341198e" ], "answer": [ { "evidence": [ "Since our focus in this paper is an additional loss exploiting negative examples (Section method), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied BIBREF11. Deviating from some prior work BIBREF0, BIBREF1, we train LMs at sentence level as in sequence-to-sequence models BIBREF12. This setting has been employed in some previous work BIBREF3, BIBREF6." ], "extractive_spans": [ "LSTM-LM " ], "free_form_answer": "", "highlighted_evidence": [ " Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "56ea7122b39e407edb60cb95f266ad75133fc643" ], "answer": [ { "evidence": [ "We first inspect the frequencies of object and subject RCs in the training data, by parsing them with the state-of-the-art Berkeley neural parser BIBREF19. In total, while subject RCs occur 373,186 times, object RCs only occur 106,558 times. We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section lm). To this end, we randomly pick up 30 million sentences from Wikipedia (not overlapped to any sentences in the original corpus), parse by the same parser, and filter sentences containing an object RC, amounting to 680,000 sentences. Among the test cases about object RCs, we compare accuracies on subject-verb agreement, to make a comparison with subject RCs. We also evaluate on “animate only” subset, which has a correspondence to the test cases for subject RC with only differences in word order and inflection (like (UNKREF45) and (UNKREF46); see footnote FOOTREF47). Of particular interest to us is the accuracy on these animate cases. Since the vocabularies are exactly the same, we hypothesize that the accuracy will reach the same level as that on subject RCs with our augmentation." ], "extractive_spans": [], "free_form_answer": "They randomly sample sentences from Wikipedia that contains an object RC and add them to training data", "highlighted_evidence": [ "We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section lm). To this end, we randomly pick up 30 million sentences from Wikipedia (not overlapped to any sentences in the original corpus), parse by the same parser, and filter sentences containing an object RC, amounting to 680,000 sentences. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "b59fdfaaacbc3d184c85ac273411f11c4f7c4f42" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "What neural language models are explored?", "How do they perform data augmentation?", "What proportion of negative-examples do they use?" 
], "question_id": [ "818c89b11471a6ca4f13c838713864fdf282c2ca", "7994b4001925798dfb381f9aa5c0545cdbd77220", "87159024d4b6dac8c456bb74a91044df292f6b99" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Comparison of syntactic dependency evaluation accuracies across different types of dependencies and perplexities. Numbers in parentheses are standard deviations. M&L18 is the result of two-layer LSTM-LM in Marvin and Linzen (2018). K19 is the result of distilled two-layer LSTM-LM from RNNGs (Kuncoro et al., 2019). VP: verb phrase; PP: prepositional phrase; SRC: subject relative clause; and ORC: object-relative clause. Margin values are set to 10, which works better according to Figure 1. Perplexity values are calculated on the test set of the Wikipedia dataset. The values of M&L18 and K19 are copied from Kuncoro et al. (2019).", "Figure 1: Margin value vs. macro average accuracy over the same type of constructions, or perplexity, with standard deviation for the sentence and token-level margin losses. δ = 0 is the baseline LSTM-LM without additional loss.", "Figure 2: Accuracies on “Across an ORC” (with and without complementizer “that”) by models trained on augmented data with additional sentences containing an object RC. Margin is set to 10. X-axis denotes the total number of object RCs in the training data. 0.37M roughly equals the number of subject RCs in the original data. “animate only” is a subset of examples (see body). Error bars are standard deviations across 5 different runs.", "Figure 3: An ablation study to see the performance of models trained with reduced explicit negative examples (token-level and construction-level). One color represents the same models across plots, except the last bar (construction-level), which is different for each plot.", "Table 2: Accuracies on long VP coordinations by the models with/without ablations. “All verbs” scores are overall accuracies. “like” scores are accuracies on examples on which the second verb (target verb) is like.", "Table 3: Further analysis of accuracies on the “other verbs” cases of Table 2. Among these cases, the second column (“likes”) shows accuracies on examples where the first verb (not target) is likes." ], "file": [ "6-Table1-1.png", "6-Figure1-1.png", "7-Figure2-1.png", "8-Figure3-1.png", "8-Table2-1.png", "8-Table3-1.png" ] }
[ "How do they perform data augmentation?" ]
[ [ "2004.02451-Limitations of LSTM-LMs ::: Setup-0" ] ]
[ "They randomly sample sentences from Wikipedia that contains an object RC and add them to training data" ]
592
1702.06777
Dialectometric analysis of language variation in Twitter
In the last few years, microblogging platforms such as Twitter have given rise to a deluge of textual data that can be used for the analysis of informal communication between millions of individuals. In this work, we propose an information-theoretic approach to geographic language variation using a corpus based on Twitter. We test our models with tens of concepts and their associated keywords detected in Spanish tweets geolocated in Spain. We employ dialectometric measures (cosine similarity and Jensen-Shannon divergence) to quantify the linguistic distance on the lexical level between cells created in a uniform grid over the map. This can be done for a single concept or in the general case taking into account an average of the considered variants. The latter permits an analysis of the dialects that naturally emerge from the data. Interestingly, our results reveal the existence of two dialect macrovarieties. The first group includes a region-specific speech spoken in small towns and rural areas whereas the second cluster encompasses cities that tend to use a more uniform variety. Since the results obtained with the two different metrics qualitatively agree, our work suggests that social media corpora can be efficiently used for dialectometric analyses.
{ "paragraphs": [ [ "Dialects are language varieties defined across space. These varieties can differ in distinct linguistic levels (phonetic, morphosyntactic, lexical), which determine a particular regional speech BIBREF0 . The extension and boundaries (always diffuse) of a dialect area are obtained from the variation of one or many features such as, e.g., the different word alternations for a given concept. Typically, the dialect forms plotted on a map appear as a geographical continuum that gradually connects places with slightly different diatopic characteristics. A dialectometric analysis aims at a computational approach to dialect distribution, providing quantitative linguistic distances between locations BIBREF1 , BIBREF2 , BIBREF3 .", "Dialectometric data is based upon a corpus that contains the linguistic information needed for the statistical analysis. The traditional approach is to generate these data from surveys and questionnaires that address variable types used by a few informants. Upon appropriate weighting, the distance metric can thus be mapped on an atlas. In the last few years, however, the impressive upswing of microblogging platforms has led to a scenario in which human communication features can be studied without the effort that traditional studies usually require. Platforms such as Twitter, Flickr, Instagram or Facebook bring us the possibility of investigating massive amounts of data in an automatic fashion. Furthermore, microblogging services provide us with real-time communication among users that, importantly, tend to employ an oral speech. Another difference with traditional approaches is that while the latter focus on male, rural informants, users of social platforms are likely to be young, urban people BIBREF4 , which opens the route to novel investigations on today's usage of language. Thanks to advances in geolocation, it is now possible to directly examine the diatopic properties of specific regions. Examples of computational linguistic works that investigate regional variation with Twitter or Facebook corpora thus far comprise English BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , Spanish BIBREF10 , BIBREF11 , BIBREF12 , German BIBREF13 , Arabic BIBREF14 and Dutch BIBREF15 . It is noticeable that many of these works combine big data techniques with probabilistic tools or machine learning strategies to unveil linguistic phenomena that are absent or hard to obtain from conventional methods (interviews, hand-crafted corpora, etc.).", "The subject of this paper is the language variation in a microblogging platform using dialectrometric measures. In contrast to previous works, here we precisely determine the linguistic distance between different places by means of two metrics. Our analysis shows that the results obtained with both metrics are compatible, which encourages future developments in the field. We illustrate our main findings with a careful analysis of the dialect division of Spanish. For definiteness, we restrict ourselves to Spain but the method can be straightforwardly applied to larger areas. We find that, due to language diversity, cities and main towns have similar linguistic distances unlike rural areas, which differ in their homogeneous forms. but obtained with a completely different method" ], [ "Our corpus consists of approximately 11 million geotagged tweets produced in Europe in Spanish language between October 2014 and June 2016. 
(Although we will focus on Spain, we will not consider in this work the speech of the Canary Islands due to difficulties with the data extraction). The classification of tweets is accomplished by applying the Compact Language Detector (CLD) BIBREF16 to our dataset. CLD exhibits accurate benchmarks and is thus good for our purposes, although a different detector might be used BIBREF17 . We have empirically checked that when CLD determines the language with a probability of at least 60% the results are extremely reliable. Therefore, we only take into account those tweets for which the probability of being written in Spanish is greater than 0.6. Further, we remove unwanted characters, such as hashtags or at-mentions, using Twokenize BIBREF18 , a tokenizer designed for Twitter text in English, adapted to our goals.", "We present the spatial coordinates of all tweets in figure FIGREF1 (only the south-western part of Europe is shown for clarity). As expected, most of the tweets are localized in Spain, mainly around major cities and along main roads.", "Next, we select a word list from Varilex BIBREF19 , a lexical database that contains Spanish variation across the world. We consider 89 concepts expressed in different forms. Our selection eliminates possible semantic ambiguities. The complete list of keywords is included in the supplementary material below. For each concept, we determine the coordinates of the tweets in which the different keywords appear. From our corpus, we find that 219362 tweets include at least one form corresponding to any of the selected concepts.", "The pictorial representation of these concepts is made using a shapefile of both the Iberian Peninsula and the Balearic Islands. Then, we construct a polygon grid over the shapefile. The size of the cells roughly corresponds to 1200 km$^2$. We locate the cell in which a given keyword matches and assign a different color to each keyword. We follow a majority criterion, i.e., we depict the cell with the keyword color whose absolute frequency is maximum. This procedure nicely yields a useful geographical representation of how the different variants for a concept are distributed over the space." ], [ "The dialectometric differences are quantified between regions defined with the aid of our cells. For this purpose we take into account two metrics, which we now briefly discuss.", "The first metric, the cosine similarity, is a vector comparison measure. It is widely used in text classification, information retrieval and data mining BIBREF20 . Let $\mathbf{x}$ and $\mathbf{y}$ be two vectors whose components are given by the relative frequencies of the lexical variations for a concept within a cell. Quite generally, $\mathbf{x}$ and $\mathbf{y}$ represent points in a high-dimensional space. The similarity measure $d_{\cos}(\mathbf{x},\mathbf{y})$ between these two vectors is related to their inner product conveniently normalized to the product of their lengths, $$d_{\cos}(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|}.$$", "This expression has an easy interpretation. If both vectors lie parallel, the direction cosine is 1 and thus the distance becomes $d_{\cos}=0$. Since all vector components in our approach are positive, the upper bound of $d_{\cos}$ is 1, which is attained when the two vectors are maximally dissimilar."
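As a quick illustration of the cosine-based distance just defined, here is a minimal NumPy sketch; the vectors are assumed to hold the relative frequencies of the variants of a single concept in two cells.

```python
# Minimal sketch of the cosine-based lexical distance between two cells.
# x and y are vectors of relative frequencies of the variants of one concept.
import numpy as np

def cosine_distance(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 - cos  # 0 for parallel vectors, up to 1 for maximally dissimilar ones

# e.g. two cells with three variants of the same concept
print(cosine_distance([0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))
```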
, "The second metric, the Jensen-Shannon distance, is a similarity measure between probability density functions BIBREF21 . It is a symmetrized version of a more general measure, the Kullback-Leibler divergence. Let $P$ and $Q$ be two probability distributions. In our case, these functions are built from the relative frequencies of each concept variation. Our frequentist approach differs from previous dialectometric works, which prefer to measure distances using the Dice similarity coefficient or the Jaccard index BIBREF22 .", "The Kullback-Leibler divergence is defined as $$D_{\mathrm{KL}}(P\,\|\,Q) = \sum_i P(i)\,\log\frac{P(i)}{Q(i)}.$$", "We now symmetrize this expression and take the square root, $$d_{\mathrm{JS}}(P,Q) = \sqrt{\tfrac{1}{2}\,D_{\mathrm{KL}}(P\,\|\,M) + \tfrac{1}{2}\,D_{\mathrm{KL}}(Q\,\|\,M)},$$", "where $M=\tfrac{1}{2}(P+Q)$. The Jensen-Shannon distance $d_{\mathrm{JS}}$ is indeed a metric, i.e., it satisfies the triangle inequality. Additionally, $d_{\mathrm{JS}}$ fulfills the metric requirements of non-negativity, $d_{\mathrm{JS}}(P,Q)=0$ if and only if $P=Q$ (identity of indiscernibles) and symmetry (by construction). This distance has been employed in bioinformatics and genome comparison BIBREF23 , BIBREF24 , social sciences BIBREF25 and machine learning BIBREF26 . To the best of our knowledge, it has not been used in studies of language variation. An exception is the work of Sanders sanders, where $d_{\mathrm{JS}}$ is calculated for an analysis of syntactic variation of Swedish. Here, we propose to apply the Jensen-Shannon metric to lexical variation. Below, we demonstrate that this idea leads to quite promising results.", "The two expressions above give the distance between cells $i$ and $j$ for a certain concept. We assign the global linguistic distance in terms of lexical variability between two cells to the mean value $$D(i,j) = \frac{1}{N}\sum_{c=1}^{N} d_c(i,j),$$ where $d_c(i,j)$ is the distance between cells $i$ and $j$ for the $c$-th concept and $N$ is the total number of concepts used to compute the distance. In the cosine similarity model, $d_c$ is given by the cosine distance above, whereas in the Jensen-Shannon metric model it is given by the Jensen-Shannon distance."
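The Jensen-Shannon distance and the concept-averaged distance can likewise be sketched in a few lines. The smoothing constant below is our own assumption, introduced to avoid undefined logarithms when a variant is missing from one cell; the paper does not state how such cases are handled.

```python
# Minimal sketch of the Jensen-Shannon distance between two relative-frequency
# distributions and of the concept-averaged distance between two cells.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def js_distance(p, q, eps=1e-12):
    # eps-smoothing is an assumption on our part, not a reported detail
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    m = 0.5 * (p + q)
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def global_distance(cell_i, cell_j, metric=js_distance):
    """Average the per-concept distances over the concepts available in both cells.
    cell_i and cell_j map concept -> vector of relative variant frequencies."""
    concepts = cell_i.keys() & cell_j.keys()
    return float(np.mean([metric(cell_i[c], cell_j[c]) for c in concepts]))

print(js_distance([0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))
```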
], [ "We first check the quality of our corpus with a few selected concepts. Examples of their spatial distributions can be seen in figure FIGREF2 . The lexical variation depends on the particular concept and on the keyword frequency. We recall that the majority rule demands that we depict the cell with the color corresponding to the most popular word. Despite a few cells appearing to be blank, we have instances in most of the map. Importantly, our results agree with the distribution for the concept cold reported by Gonçalves and Sánchez BD with a different corpus. The north-south bipartition of the variation suggested in figure FIGREF2 (a) also agrees with more traditional studies BIBREF27 . As a consequence, these consistencies support the validity of our data. The novelty of our approach is to further analyze this dialect distribution with a quantitative measure as discussed below." ], [ "Let us quantify the lexical difference between regions using the concept cold as an illustration. First, we generate a symmetric matrix of linguistic distances $D(i,j)$ between pairs of cells $i$ and $j$, with the distance calculated using either the cosine or the Jensen-Shannon expression above. Then, we find the maximum value in the matrix ($D_{\max}$) and select either of its corresponding indices $i$ or $j$ as the reference cell. Since both metrics are symmetric, the choice between $i$ and $j$ should not affect the results much (see below for a detailed analysis). Next, we normalize all values by $D_{\max}$ and plot the distances to the reference cell using a color scale within the range $[0,1]$, whose lowest and highest values are set for convenience due to the normalization procedure. The results are shown in figure FIGREF12 . Panel (a) [(b)] is obtained with the cosine similarity (Jensen-Shannon metric). Crucially, we observe that both metrics give similar results, which confirms the robustness of our dialectometric method.", "Clearly, cells with a low number of tweets will largely contribute to fluctuations in the maps. To avoid this noise-related effect, we impose in figure FIGREF13 a minimum threshold of 5 tweets in every cell. Obviously, the number of colored cells decreases, but at the same time the fluctuations become quenched. If the threshold is increased up to 10 tweets, we obtain the results plotted in figure FIGREF14 , where the north-south bipartition is now better seen. We stress that there exist only minimal differences between the cosine similarity and the Jensen-Shannon metric models." ], [ "Our previous analysis assessed the lexical distance for a single concept (cold). Let us now take into account all concepts and calculate the averaged distances using the concept-averaged expression above. To do so, we proceed as above and measure the distance from either of the two cells that present the maximal value of $D$, where $D$ is now the concept-averaged distance. As aforementioned, $D_{\max}$ connects two cells, which we denote as $i$ and $j$. Either of these can be selected as the reference cell from which the remaining linguistic distances are plotted in the map. To ensure that we obtain the same results, we plot the distance distribution in both directions. The results with the cosine similarity model are shown in figure FIGREF16 . It is worth noting that qualitatively the overall picture is only slightly modified when the reference cell is changed from $i$ [figure FIGREF16 (a)] to $j$ [figure FIGREF16 (b)]. The same conclusion is reached when the distance is calculated with the Jensen-Shannon metric model, see figures FIGREF17 (a) and (b).", "After averaging over all concepts, we lose information on the lexical variation that each concept presents, but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. In these cells, the use of standard Spanish language is widespread, probably due to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefers particular utterances, the lexicon of the urban group includes most of the keywords. Importantly, we emphasize that both distance measures (cosine similarity and Jensen-Shannon) give rise to the same result, with little discrepancies in the numerical values that are not significant. 
The presence of two Twitter superdialects (urban and rural) has been recently suggested BIBREF10 based on a machine learning approach. Here, we arrive at the same conclusion but with a totally distinct model and corpus. The advantage of our proposal is that it may serve as a useful tool for dialectometric purposes." ], [ "To sum up, we have presented a dialectrometric analysis of lexical variation in social media posts employing information-theoretic measures of language distances. We have considered a grid of cells in Spain and have calculated the linguistic distances in terms of dialects between the different regions. Using a Twitter corpus, we have found that the synchronic variation of Spanish can be grouped into two types of clusters. The first region shows more lexical items and is present in big cities. The second cluster corresponds to rural regions, i.e., mostly villages and less industrialized regions. Furthermore, we have checked that the different metrics used here lead to similar results in the analysis of the lexical variation for a representative concept and provide a reasonable description to language variation in Twitter.", "We remark that the small amount of tweets generated after matching the lexical variations of concepts within our automatic corpus puts a limit to the quantitative analysis, making the differences between regions small. Our work might be improved by similarly examining Spanish tweets worldwide, specially in Latin America and the United States. This approach should give more information on the lexical variation on the global scale and would help linguists in their dialectal classification work of micro- and macro-varieties. Our work hence represents a first step into the ambitious task of a thorough characterization of language variation using big data resources and information-theoretic methods." ], [ "We thank both F. Lamanna and Y. Kawasaki for useful discussions and the anonymous reviewers for nice suggestions. GD acknowledges support from the SURF@IFISC program." ], [ "Here we provide a list of our employed concepts and their lexical variants.", "0pt 0pt plus 1fil" ] ], "section_name": [ "Introduction", "Methods", "Language distance", "Results and discussion", "Single-concept case", "Global distance", "Conclusions", "Acknowledgments", "Supplementary material" ] }
{ "answers": [ { "annotation_id": [ "5748b2ccabb3dd76238f8f9c7a33bd6bd7ab96e3" ], "answer": [ { "evidence": [ "After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefer particular utterances, the lexicon of the urban group includes most of the keywords. Importantly, we emphasize that both distance measures (cosine similarity and Jensen-Shanon) give rise to the same result, with little discrepancies on the numerical values that are not significant. The presence of two Twitter superdialects (urban and rural) has been recently suggested BIBREF10 based on a machine learning approach. Here, we arrive at the same conclusion but with a totally distinct model and corpus. The advantage of our proposal is that it may serve as a useful tool for dialectometric purposes." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas.", " In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "af16aadc1cd5520897ecf8f7cb0bf0a0d8349a66" ], "answer": [ { "evidence": [ "After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. 
In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefer particular utterances, the lexicon of the urban group includes most of the keywords. Importantly, we emphasize that both distance measures (cosine similarity and Jensen-Shanon) give rise to the same result, with little discrepancies on the numerical values that are not significant. The presence of two Twitter superdialects (urban and rural) has been recently suggested BIBREF10 based on a machine learning approach. Here, we arrive at the same conclusion but with a totally distinct model and corpus. The advantage of our proposal is that it may serve as a useful tool for dialectometric purposes." ], "extractive_spans": [], "free_form_answer": "Lexicon of the cities tend to use most forms of a particular concept", "highlighted_evidence": [ "After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefer particular utterances, the lexicon of the urban group includes most of the keywords." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "b2714152a23a6a3150d358c50487953ab3138939" ], "answer": [ { "evidence": [ "After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefer particular utterances, the lexicon of the urban group includes most of the keywords. 
Importantly, we emphasize that both distance measures (cosine similarity and Jensen-Shanon) give rise to the same result, with little discrepancies on the numerical values that are not significant. The presence of two Twitter superdialects (urban and rural) has been recently suggested BIBREF10 based on a machine learning approach. Here, we arrive at the same conclusion but with a totally distinct model and corpus. The advantage of our proposal is that it may serve as a useful tool for dialectometric purposes." ], "extractive_spans": [], "free_form_answer": "It uses particular forms of a concept rather than all of them uniformly", "highlighted_evidence": [ "After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now investigate which regions show similar geolectal variation, yielding well defined linguistic varieties. Those cells that have similar colors in either figure FIGREF16 or figure FIGREF17 are expected to be ascribed to the same dialect zone. Thus, we can distinguish two main regions or clusters in the maps. The purple background covers most of the map and represents rural regions with small, scattered population. Our analysis shows that this group of cells possesses more specific words in their lexicon. In contrast, the green and yellow cells form a second cluster that is largely concentrated on the center and along the coastline, which correspond to big cities and industrialized areas. In these cells, the use of standard Spanish language is widespread due probably to school education, media, travelers, etc. The character of its vocabulary is more uniform as compared with the purple group. While the purple cluster prefer particular utterances, the lexicon of the urban group includes most of the keywords. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do the authors mention any possible confounds in their study?", "What are the characteristics of the city dialect?", "What are the characteristics of the rural dialect?" ], "question_id": [ "0d755ff58a7e22eb4d02fca45d4a7a3920f4e725", "ff2bcf2d8ffee586751ce91cf15176301267b779", "55588ae77496e7753bff18763a21ca07d9f93240" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "twitter", "twitter", "twitter" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Heatmap of Spanish tweets geolocated in Europe. There exist 11208831 tweets arising from a language detection and tokenization procedure. We have zoomed in those arising in Spain, Portugal and the south of France.", "Figure 2: Spatial distribution of a few representative concepts based on the maximum absolute frequency criterion. Each concept has a lexical variation as indicated in the figure. The concepts are: (a) cold, (b) school, (c) streetlight, (d) fans.", "Figure 3: Linguistic distances for the concept cold using (a) cosine similarity and (b) Jensen-Shannon divergence metrics. The horizontal (vertical) axis is expressed in longitude (latitude) coordinates.", "Figure 4: Linguistic distances as in figure 3 but with a minimum threshold of 5 tweets in each cell using (a) cosine similarity and (b) Jensen-Shannon metric.", "Figure 5: Linguistic distances as in figure 3 but with a minimum threshold of 10 tweets in each cell using (a) cosine similarity and (b) Jensen-Shannon metric.", "Figure 6: Global distances averaged over all concepts. Here, we use the cosine similarity measure to calculate the distance. The color distribution displays a small variation from (a) to (b) due to the change of the reference cell.", "Figure 7: Global distances averaged over all concepts. Here, we use the Jensen-Shannon metric to calculate the distance. The color distribution displays a small variation from (a) to (b) due to the change of the reference cell." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "5-Figure5-1.png", "6-Figure6-1.png", "6-Figure7-1.png" ] }
[ "What are the characteristics of the city dialect?", "What are the characteristics of the rural dialect?" ]
[ [ "1702.06777-Global distance-1" ], [ "1702.06777-Global distance-1" ] ]
[ "Lexicon of the cities tend to use most forms of a particular concept", "It uses particular forms of a concept rather than all of them uniformly" ]
593
1912.00582
BLiMP: A Benchmark of Linguistic Minimal Pairs for English
We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars, and aggregate human agreement with the labels is 96.4%. We use it to evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs. We find that state-of-the-art models identify morphological contrasts reliably, but they struggle with semantic restrictions on the distribution of quantifiers and negative polarity items and subtle syntactic phenomena such as extraction islands.
{ "paragraphs": [ [ "Current neural networks for language understanding rely heavily on unsupervised pretraining tasks like language modeling. However, it is still an open question what degree of knowledge state-of-the-art language models (LMs) acquire about different linguistic phenomena. Many recent studies BIBREF0, BIBREF1, BIBREF2 have advanced our understanding in this area by evaluating LMs' preferences between minimal pairs of sentences, as in Example SECREF1. However, these studies have used different analysis metrics and focused on a small set of linguistic paradigms, making a big-picture comparison between these studies limited.", ". Ṫhe cat annoys Tim. (grammatical) The cat annoy Tim. (ungrammatical)", "We introduce the Benchmark of Linguistic Minimal Pairs (shortened to BLiMP or just *X ) a linguistically-motivated benchmark for assessing LMs' knowledge across a wide variety of English phenomena, encapsulating both previously studied and novel contrasts. *X consists of 67 datasets automatically generated from expert-crafted grammars, each containing 1000 minimal pairs and organized by phenomenon into 12 categories. Validation with crowd workers shows that humans overwhelmingly agree with the contrasts in *X .", "We use *X to study several pretrained LMs: Transformer-based LMs GPT-2 BIBREF3 and Transformer-XL BIBREF4, an LSTM LM trained by BIBREF5, and a $n$-gram LM. We evaluate whether the LM assigns a higher probability to the acceptable sentence in each minimal pair in *X . This experiment gives a sense of which grammatical distinctions LMs are sensitive to in general, and the extent to which unrelated models have similar strengths and weaknesses. We conclude that current neural LMs robustly learn agreement phenomena and even some subtle syntactic phenomena such as ellipsis and control/raising. They perform comparatively worse (and well below human level) on minimal pairs related to argument structure and the licensing of negative polarity items and quantifiers. All models perform at or near chance on extraction islands, which we conclude is the most challenging phenomenon covered by *X . Overall, we note that all models we evaluate fall short of human performance by a wide margin. GPT-2, which performs the best, does match (even just barely exceeds) human performance on some grammatical phenomena, but remains 8 percentage points below human performance overall.", "We conduct additional experiments to investigate the effect of training size on LSTM model performance on *X . We show that learning trajectories differ, sometimes drastically, across different paradigms in the dataset, with phenomena such as anaphor agreement showing consistent improvement as training size increases, and other phenomena such as NPIs and extraction islands remaining near chance despite increases in training size. We also compare overall sentence probability to two other built-in metrics coded on *X and find that the chosen metric changes how we evaluate relative model performance." ], [ "The objective of a language model is to give a probability distribution over the possible strings of a language. Language models can be built on neural network models or non-neural network models. Due to their unsupervised nature, they can be trained without external annotations. More recently, neural network based language modeling has been shown to be a strong pretraining task for natural language understanding tasks BIBREF6, BIBREF7, BIBREF8, BIBREF9. 
Some recent models, such as BERT BIBREF9 use closely related tasks such as masked language modeling.", "In the last decade, we have seen two major paradigm shifts in the state of the art for language modeling. The first major shift for language modeling was the movement from statistical methods based on $n$-grams BIBREF10 to neural methods such as LSTMs BIBREF11, which directly optimize on the task of predicting the next word. More recently, Transformer-based architectures employing self-attention BIBREF12 have outperformed LSTMs at language modeling BIBREF4. Although it is reasonably clear that these shifts have resulted in stronger language models, the primary metric of performance is perplexity, which cannot give detailed insight into these models' linguistic knowledge. Evaluation on downstream task benchmarks BIBREF13, BIBREF14 is more informative, but might not present a broad enough challenge or represent grammatical distinctions at a sufficiently fine-grained level." ], [ "A large number of recent studies has used acceptability judgments to reveal what neural networks know about grammar. One branch of this literature has focused on using minimal pairs to infer whether LMs learn about specific linguistic phenomena. Table TABREF4 gives a summary of work that has studied linguistic phenomena in this way. For instance, linzen2016assessing look closely at minimal pairs contrasting subject-verb agreement. marvin2018targeted look at a larger set of phenomena, including negative polarity item licensing and reflexive licensing. However, a relatively small set of phenomena is covered by these studies, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, distributional restrictions on quantifiers, and countless others. This is likely due to the labor-intensive nature of collecting examples that exhibit informative grammatical phenomena and their acceptability judgments.", "A related line of work evaluates neural networks on acceptability judgments in a more general domain of grammatical phenomena. Corpora of sentences and their grammaticality are collected for this purpose in a number of computational studies on grammaticality judgment BIBREF26, BIBREF27, BIBREF16. The most recent and comprehensive corpus is CoLA BIBREF16, which contains around 10k sentences covering a wide variety of linguistic phenomena from 23 linguistic papers and textbooks. CoLA, which is included in the GLUE benchmark BIBREF13, has been used to track advances in the general grammatical knowledge of reusable sentence understanding models. Current models like BERT BIBREF9 and T5 BIBREF28 can be trained to give acceptability judgments that approach or even exceed individual human agreement with CoLA.", "While CoLA can also be used to evaluate phenomenon-specific knowledge of models, this method is limited by the need to train a supervised classifier on CoLA data prior to evaluation. BIBREF29 compare the CoLA performance of pretrained sentence understanding models: an LSTM, GPT BIBREF8, and BERT. They find that these models have good performance on sentences involving marked argument structure, and struggle on sentences with long-distance dependencies like those found in questions, though the Transformers have a noticeable advantage. However, evaluating supervised classifiers prevents making strong conclusions about the models themselves, since biases in the training data may affect the results. 
For instance, relatively strong performance on a phenomenon might be due to a model's implicit knowledge or to frequent occurrence of similar examples in the training data. Evaluating LMs on minimal pairs evades this problem by eschewing supervised training on acceptability judgments. It is possible to use the LM probability of a sentence as a proxy for acceptability because other factors impacting a sentence's probability such as length and lexical content are controlled for." ], [ "The *X dataset consists of 67 paradigms of 1000 sentence pairs. Each paradigm is annotated for the unique contrast it isolates and the broader category of phenomena it is part of. The data is automatically generated according to expert-crafted grammars, and our automatic labels are validated with crowd-sourced human judgments." ], [ "To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available.", ". DP1 V1 refl_match .", "The cats licked themselves .", ". DP1 V1 refl_mismatch .", "The cats licked itself .", "This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast." ], [ "The paradigms that are covered by *X represent well-established contrasts in English morphology, syntax, and semantics. Each paradigm is grouped into one of 12 phenomena, shown in Table TABREF1. The paradigms are selected with the constraint that they can be illustrated with minimal pairs of equal sentence length and that it is of a form that could be written as a template, like in SECREF6 and SECREF6. While this dataset has broad coverage, it is not exhaustive – it is not possible to include every grammatical phenomenon of English, and there is no agreed-upon set of core phenomena. However, we consider frequent inclusion of a phenomenon in a syntax/semantics textbook as an informal proxy for what linguists consider to be core phenomena. We survey several syntax textbooks BIBREF31, BIBREF32, BIBREF33, and find that nearly all of the phenomena in *X are discussed in some source, and most of the topics that repeatedly appear in textbooks and can be represented with minimal pairs (e.g. agreement, argument selection, control/raising, wh-extraction/islands, binding) are present in *X . Because the generation code is reusable, it is possible to generate paradigms not included in *X in the future." 
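To make the template mechanism concrete, the sketch below expands a toy version of the anaphor-agreement template shown above into acceptable/unacceptable pairs. The vocabulary entries and feature names are invented for illustration and are far simpler than the 3000-word annotated vocabulary used to generate the actual dataset.

```python
# Illustrative sketch (not the released generation code) of expanding a template
# like "DP1 V1 refl_match / refl_mismatch" into minimal pairs from a tiny,
# hypothetical feature-annotated vocabulary.
import random

NOUNS = [{"word": "the cats", "number": "pl"}, {"word": "the boy", "number": "sg"}]
VERBS = ["licked", "hurt"]
REFLEXIVES = {"sg": "itself", "pl": "themselves"}

def anaphor_agreement_pair(rng=random):
    dp = rng.choice(NOUNS)
    v = rng.choice(VERBS)
    good_refl = REFLEXIVES[dp["number"]]                              # agrees with antecedent
    bad_refl = REFLEXIVES["sg" if dp["number"] == "pl" else "pl"]     # number mismatch
    good = f"{dp['word']} {v} {good_refl} ."
    bad = f"{dp['word']} {v} {bad_refl} ."
    return good, bad

print(anaphor_agreement_pair())  # e.g. ('the cats licked themselves .', 'the cats licked itself .')
```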
], [ "With over 3000 words, *X has by far the widest lexical variability of any related generated dataset. The vocabulary includes verbs with 11 different subcategorization frames, including verbs that select for PPs, infinitival VPs, and embedded clauses. By comparison, datasets by BIBREF30 and BIBREF1 each use a vocabulary of well under 200 items. Other datasets of minimal pairs that achieve greater lexical and syntactic variety use data-creation methods that are limited in terms of empirical scope or control. BIBREF0 construct a dataset of minimal pairs for subject-verb agreement by changing the number marking on present-tense verbs in a subset of English Wikipedia. However this approach does not generalize beyond simple agreement phenomena. BIBREF27 build a dataset of minimal pairs by taking sentences from the BNC through round-trip machine translation. The resulting sentences contain a wider variety of grammatical violations, but it is not possible to control the nature of the violation and a single sentence may contain several violations." ], [ "To verify that the generated sentences represent a real contrast in acceptability, we conduct human validation via Amazon Mechanical Turk. Twenty separate validators rated five pairs from each of the 67 paradigms, for a total of 6700 judgments. We restricted validators to individuals currently located in the US who self-reported as native speakers of English. To assure that our validators made a genuine effort on the task, each HIT included an attention check item and a hidden field question to catch bot-assisted humans. For each minimal pair, 20 different individuals completed a forced-choice task that mirrors the task done by the LMs; the human-determined “acceptable” sentence was calculated via majority vote of annotators. By this metric, we estimate aggregate human agreement with our annotations to be 96.4% overall. As a threshold of inclusion in *X , the majority of validators needed to agree with *X on at least 4/5 examples from each paradigm. Thus, all 67 paradigms in the public version of *X passed this validation, and only two additional paradigms had to be rejected on this criterion. We also estimate individual human agreement to be 88.6% overall using the approximately 100 annotations from each paradigm. Figure TABREF14 reports these individual human results (alongside model results) as a conservative measure of human agreement.", "white" ], [ "GPT-2 BIBREF3 is a large-scale language model using the Transformer architecture BIBREF12. We use the large version of GPT-2, which contains 24 layers and 345M parameters. The model is pretrained on BIBREF3's custom-built WebText dataset, which contains 40GB of text extracted from web pages and filtered by humans. To our best knowledge, the WebText corpus is not publicly available. Assuming approximately 5-6 bytes/chars per word on average, we estimate WebText contains approximately 8B tokens. The testing code for GPT-2 has been integrated into jiant, a codebase for training and evaluating sentence understanding models BIBREF34." ], [ "Transformer-XL BIBREF4 is another multi-layer Transformer-based neural language model. We test a pretrained Transformer-XL model with 18 layers of Transformer decoders and 16 attention heads for each layer. The model is trained on WikiText-103 BIBREF35, a corpus of 103M tokens from high-quality Wikipedia articles. Code for testing Transformer-XL on *X is also implemented in jiant." 
], [ "We include a long-short term memory (LSTM, BIBREF36) language model in our experiments. Specifically, we test a pretrained LSTM language model from BIBREF5 on *X . The model is trained on a 90M token corpus extracted from English Wikipedia. For investigating the effect of training size on models' *X performance, We retrain a series of LSTM models with the same hyperparameters and the following training sizes: 64M, 32M, 16M, 8M, 4M, 2M, 1M, 1/2M, 1/4M, and 1/8M tokens. For each size, we train the model on five different random samples drawing from the original training data, which has a size of 83M tokens. We release our LSTM evaluation code." ], [ "We build a 5-gram LM on the English Gigaword corpus BIBREF37, which consists of 3.07B tokens. To efficiently query $n$-grams we use an implementation based on BIBREF38, which is shown to speed up estimation BIBREF39. We release our $n$-gram evaluation code." ], [ "We mainly evaluate the models by measuring whether the LM assigns a higher probability to the grammatical sentence within the minimal pair. This method, used by BIBREF1, is only meaningful for comparing sentences of similar length and lexical content, as overall sentence probability tends to decrease as sentence length increases or word frequencies decrease BIBREF27. However, as discussed in Section SECREF3 we design every paradigm in *X to be compatible with this method." ], [ "We report the 12-category accuracy results for all models and human evaluation in Table TABREF14." ], [ "An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance.", "Because we evaluate pretrained models that differ in architecture and training data quantity/domain, we can only speculate about what drives these differences (though see Section SECREF37 for a controlled ablation study on the LSTM LM). Nonetheless, the results seem to indicate that access to training data is the main driver of performance on *X for the neural models we evaluate. On purely architectural grounds, the similar performance of Transformer-XL and the LSTM is surprising since Transformer-XL is the state of the art on several LM training sets. However, they are both trained 100$\\pm 10$M tokens of Wikipedia text. Relatedly, GPT-2's advantage may come from the fact that it is trained on roughly two orders of magnitude more data. While it is unclear whether LSTMs trained on larger datasets could rival GPT-2, such experiments are impractical due to the difficulty of scaling LSTMs to this size." ], [ "The results also reveal considerable variation in performance across grammatical phenomena. Models generally perform best and closest to human level on morphological phenomena. This includes anaphor agreement, determiner-noun agreement, and subject-verb agreement. In each of these domains, GPT-2's performance is within 2.1 percentage points of humans. The set of challenging phenomena is more diverse. 
], [ "The results also reveal considerable variation in performance across grammatical phenomena. Models generally perform best and closest to human level on morphological phenomena. This includes anaphor agreement, determiner-noun agreement, and subject-verb agreement. In each of these domains, GPT-2's performance is within 2.1 percentage points of humans. The set of challenging phenomena is more diverse. Islands are the hardest phenomenon by a wide margin. Only GPT-2 performs noticeably above chance, but it remains 20 points below humans. Some semantic phenomena, specifically those involving NPIs and quantifiers, are also challenging overall. All models show relatively weak performance on argument structure.", "From these results we conclude that current SotA LMs have robust knowledge of basic facts of English agreement. This does not mean that LMs will come close to human performance for all agreement phenomena. Section SECREF32 discusses evidence that increased dependency length and the presence of agreement attractors of the kind investigated by BIBREF0 and BIBREF5 reduce performance on agreement phenomena.", "The exceptionally poor performance on islands is hard to reconcile with BIBREF2's conclusion that LSTMs have knowledge of some island constraints. In part, this difference may come down to differences in metrics. BIBREF2 compare a set of four related sentences with gaps in the same position or no gaps to obtain the wh-licensing interaction as a metric of how strongly the LM identifies a filler-gap dependency in a single syntactic position. They consider an island constraint to have been learned if this value is close to zero. We instead compare LM probabilities of sentences with similar lexical content but with gaps in different syntactic positions. These metrics target different forms of grammatical knowledge, though both are desirable properties to find in an LM. We also note that the LMs we test do not have poor knowledge of filler-gap dependencies in general, with all neural models performing well above chance. This suggests that, while these models are able to establish long-distance dependencies in general, they are comparatively worse at identifying the syntactic domains in which these dependencies are blocked.", "The semantic phenomena that models struggle with are usually attributed in current theories to a presupposition failure or contradiction arising from semantic composition or pragmatic reasoning BIBREF40, BIBREF41, BIBREF42. These abstract semantic and pragmatic factors may be difficult for LMs to learn. BIBREF1 also find that LSTMs largely fail to recognize NPI licensing conditions. BIBREF20 find that BERT (which is similar in scale to GPT-2) recognizes these conditions inconsistently in an unsupervised setting.", "The weak performance on argument structure is somewhat surprising, since arguments are usually (though by no means always) local to their heads. Argument structure is closely related to semantic event structure BIBREF43, which may be comparatively difficult for LMs to learn. This finding contradicts BIBREF29's conclusion that argument structure is one of the strongest domains for neural models. However, BIBREF29 study supervised models trained on CoLA, which includes a large proportion of sentences related to argument structure." ], [ "We also examine to what extent the models' performances are similar to each other, and how similar they are to human evaluation in terms of which phenomena are comparatively difficult. Figure TABREF29 shows the Pearson correlation between the four LMs and human evaluation on their accuracies over the 67 paradigms. Compared to humans, GPT-2 has the highest correlation, closely followed by Transformer-XL and the LSTM, though the correlation is only moderate. The $n$-gram's performance correlates with humans relatively weakly. 
Transformer-XL and LSTM are very highly correlated at 0.9, possibly reflecting their similar training data. Also, neural models correlate with each other more strongly than with humans or the $n$-gram model, suggesting neural networks share some biases that are not entirely human-like." ], [ "We also ask what factors aside from linguistic phenomena make a minimal pair harder or easier for an LM to distinguish. We test whether shallow features like sentence length or overall sentence likelihood are predictors of whether the LM will have the right preference. The results are shown in Figure FIGREF31. While sentence length, perplexity and the probability of the good sentence all seem to predict model performance to a certain extent, the predictive power is not strong, especially for GPT-2, which is much less influenced by greater perplexity of the good sentence than the other models." ], [ "The presence of intervening material that lengthens an agreement dependency lowers accuracy on that sentence in both humans and LMs. We study how the presence or absence of this intervening material affects the ability of LMs to detect mismatches in agreement in *X . First, we test for knowledge of determiner-noun agreement with and without an intervening adjective, as in Example SECREF32. The results are plotted in Figure FIGREF33. The $n$-gram model is the most heavily impacted, performing on average 35 points worse. This is unsurprising, since the bigram consisting of a determiner and noun is far more likely to be observed than the trigram of determiner, adjective, and noun. For the neural models, we find a weak but consistent effect, with all models performing on average between 3 and 5 points worse when there is an intervening adjective.", "Ron saw that man/*men. Ron saw that nice man/*men.", "Second, we test for sensitivity to mismatches in subject-verb agreement when an “attractor” noun of the opposite number intervenes. We compare attractors in relative clauses and as part of a relational noun as in Example SECREF32, following experiments by BIBREF0 and others. Again, we find an extremely large effect for the $n$-gram model, which performs over 50 points worse and well below chance when there is an attractor present, showing that the $n$-gram model is consistently misled by the presence of the attractor. All of the neural models perform above chance with an attractor present, but GPT-2 and the LSTM perform 22 and 20 points worse when an attractor is present. Transformer-XL's performance is harmed by only 5 points. Note that GPT-2 still has the highest performance in both cases, and even outperforms humans in the relational noun case. Thus, we reproduce BIBREF0's finding that attractors significantly reduce LSTM LMs' sensitivity to mismatches in agreement and find evidence that this holds true of Transformer LMs as well.", "The sisters bake/*bakes. The sisters who met Cheryl bake/*bakes. The sisters of Cheryl bake/*bakes." ], [ "In the determiner-noun agreement and subject-verb agreement categories, we generate separate datasets for nouns with regular and irregular number marking, as in Example SECREF34. All else being equal, only models with access to sub-word-level information should make any distinction between regular and irregular morphology.", "Ron saw that nice kid/*kids. (regular) Ron saw that nice man/*men. 
(irregular)", "Contrary to this prediction, the results in Figure FIGREF36 show that the sub-word-level models GPT-2 and Transformer-XL show little effect of irregular morphology: they perform less than $0.013$ worse on irregulars than regulars. Given their high performance overall, this suggests they robustly encode number features without relying on segmental cues." ], [ "We also use *X to track how a model's knowledge of particular phenomena varies with the quantity of training data. We test this with the LSTM model and find that different phenomena in *X have notably different learning curves across different training sizes, as shown in Figure FIGREF39. Crucially, phenomena with similar results from the LSTM model trained on the full 83M tokens of training data may have very different learning curves. For example, the LSTM model performs well on both irregular forms and anaphor agreement, but the different learning curves suggest that more training data is required in the anaphor agreement case to achieve this same performance level. This is supported by a regression analysis showing that the best-fit line for anaphor agreement has the steepest slope (0.0623), followed by Determiner-Noun agreement (0.0426), Subject-Verb agreement (0.041), Irregular (0.039) and Ellipsis (0.0389). By contrast, Binding (0.016), Argument Structure (0.015), and Filler-Gap Dependency (0.0095) have shallower learning curves, showing a less strong effect of increases in training data size. The phenomena that showed the lowest performance overall, NPIs and Islands, also show the lowest effects of increases to training size, with slopes of 0.0078 and 0.0036, respectively. This indicates that, even given a substantially larger amount training data, the LSTM is unlikely to achieve human-like performance on these phenomena – it simply fails to learn the necessary dependencies. It should be noted that these differences in learning curves show how *X performance dissociates from perplexity, the standard measure of LM performance: while perplexity keeps decreasing as training size increases, the performance in different *X phenomena show very different learning curves." ], [ "There are several other techniques one can use to measure an LM's “preference” between two minimally different sentences. So far, we have considered only the full-sentence method, advocated for by BIBREF1, which compares the LM likelihood of the full sentences. In a followup experiment, we use two “prefix methods”, each of which has appeared in prior work in this area, that evaluate the model's preferences by comparing its prediction at a key point of divergence between the two sentences. Subsets of *X data—from the binding, determiner-noun agreement, and subject-verb agreement categories—are designed to be compatible with multiple methods, allowing us to conduct the first direct comparison. We find that all methods give broadly similar results when aggregating over a large set of paradigms, but some results diverge sharply for specific paradigms." ], [ "In the one-prefix method, used by BIBREF0, a pair of sentences share the same initial portion of a sentence, but differ in a critical word that make them differ in grammaticality (e.g., The cat eats mice vs. The cat eat mice). The model's prediction is correct if it assigns a higher probability to the grammatical token given the shared prefix." 
], [ "In the two-prefix method, used by BIBREF19, a pair of sentences have a different initial portion that diverge in some critical way, but the grammaticality difference is only revealed when a shared critical word is included (e.g., The cat eats mice vs. The cats eats mice). For these paradigms, we evaluate whether the model assigns a higher probability to the critical word conditioned on the grammatical prefix compared the ungrammatical prefix. Note that the same pair of sentences cannot be compatible with both prefix methods, and that a pair may be compatible with the full-sentence method but neither prefix method.", "For both prefix methods, it is crucial that the grammaticality of the sentence is unambiguously predictable from the critical word, but not sooner. With simple LM probabilities, the probabilities of the rest of the word tokens in the sentence also affect the performance. For example, a model may predict that `The cat ate the mouse' is more likely than `The cat eaten the mouse' without correctly predicting that $P(\\emph {ate}|\\emph {the cat}) > P(\\emph {eaten}|\\emph {the cat})$ if it predicts that $P(\\emph {the mouse}|\\emph {the cat ate})$ is much greater than $P(\\emph {the mouse}|\\emph {the cat eaten})$. Furthermore, it is unclear how a model assigns probabilities conditioned on an ungrammatical prefix, since ungrammatical sentences are largely absent from the training data. Using prefix probabilities allow us to exclude models' use of this additional information and evaluate how the models perform when they have just enough information to judge grammaticality." ], [ "The results in Figure FIGREF42 show that models have generally comparable accuracies overall in prefix methods and the simple whole-sentence LM method. However, A deeper examination of the differences between these methods in each paradigm reveals some cases where a models' performance fluctuates more between these methods. For example, Transformer-XL performs much worse at binding, determiner-noun agreement, and subject-verb agreement in the simple LM method, suggesting that the probabilities Transformer-XL assigns to the irrelevant part at the end of the sentence very often overturn the `judgment' based on probability up to the critical word. On the other hand, GPT-2 benefits from reading the whole sentence for binding phenomena, as its performance is better in the simple LM method than in the prefix method. Overall, we observe that Transformer-XL and GPT-2 are more affected by evaluation methods than LSTM and $n$-gram when we compare the simple LM method and the two-prefix method." ], [ "We have shown ways in which *X can be used as tool to gain both high-level and fine-grained insight into the grammatical knowledge of language models. Like the GLUE benchmark BIBREF13, *X assigns a single overall score to an LM which summarizes its general sensitivity to minimal pair contrasts. Thus, it can function as a linguistically motivated benchmark for the general evaluation of new language models. *X also provides a breakdown of LM performance by linguistic phenomenon, which can be used to draw concrete conclusions about the kinds of grammatical knowledge acquired by a given model. This kind of information is useful for detailed comparisons across models, as well as in ablation studies.", "One question we leave unexplored is how well supervised acceptability classifiers built on top of pretrained models like BERT BIBREF9 perform on *X . 
It would be possible to evaluate how well such classifiers generalize to unseen phenomena by training on a subset of paradigms in *X and evaluating on the held-out sets, giving an idea of to what extent models are able to transfer knowledge in one domain to a similar one. BIBREF20 find that this method is potentially more revealing of implicit grammatical knowledge than purely unsupervised methods.", "An important goal of linguistically-informed analysis of LMs is to better understand those empirical domains where current LMs appear to acquire some relevant knowledge, but still fall short of human performance. The results from *X suggest that—in addition to relatively well-studied phenomena like filler-gap dependencies, NPIs, and binding—argument structure remains one area where there is much to uncover about what LMs learn. More generally, as language modeling techniques continue to improve, it will be useful to have large-scale tools like *X to efficiently track changes in what these models do and do not know about grammar." ], [ "This material is based upon work supported by the National Science Foundation under Grant No. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This project has also benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU)." ] ], "section_name": [ "Introduction", "Background & Related Work ::: Language Models", "Background & Related Work ::: Evaluating Linguistic Knowledge", "Data", "Data ::: Data generation procedure", "Data ::: Coverage", "Data ::: Comparison to Related Resources", "Data ::: Data validation", "Models & Methods ::: Models ::: GPT-2", "Models & Methods ::: Models ::: Transformer-XL", "Models & Methods ::: Models ::: LSTM", "Models & Methods ::: Models ::: 5-gram", "Models & Methods ::: Evaluation", "Results", "Results ::: Overall Results", "Results ::: Phenomenon-Specific Results", "Results ::: Correlation of Model & Human Performance", "Results ::: Shallow Predictors of Performance", "Additional Experiments ::: Long-Distance Dependencies", "Additional Experiments ::: Regular vs. Irregular Agreement", "Additional Experiments ::: Training size and *X performance", "Additional Experiments ::: Alternate Evaluation Methods", "Additional Experiments ::: Alternate Evaluation Methods ::: One-prefix method", "Additional Experiments ::: Alternate Evaluation Methods ::: Two-prefix method", "Additional Experiments ::: Alternate Evaluation Methods ::: Results", "Discussion & Future Work", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "adb326cffa46d1988d6b73a7a8a5106163f9f460" ], "answer": [ { "evidence": [ "An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance." ], "extractive_spans": [ "GPT-2" ], "free_form_answer": "", "highlighted_evidence": [ "GPT-2 achieves the highest score and the $n$-gram the lowest." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "576e7250faa7a7429ad6502248f3f49ad2527a4c" ], "answer": [ { "evidence": [ "An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance.", "We report the 12-category accuracy results for all models and human evaluation in Table TABREF14." ], "extractive_spans": [], "free_form_answer": "Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)", "highlighted_evidence": [ "GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other.", "We report the 12-category accuracy results for all models and human evaluation in Table TABREF14." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "b71e499af8d8fc0e13f131bfeb770d4e0582b8b5" ], "answer": [ { "evidence": [ "To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. 
Our generation codebase and scripts are freely available.", "This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast." ], "extractive_spans": [ " The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences." ], "free_form_answer": "", "highlighted_evidence": [ "To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available.", "This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "two", "two", "two" ], "paper_read": [ "no", "no", "no" ], "question": [ "Which of the model yields the best performance?", "What is the performance of the models on the tasks?", "How is the data automatically generated?" ], "question_id": [ "4d5f112874250d48eb49649c4abe31d6c9236700", "8985ead714236458a7496075bc15054df0e3234e", "49aecc50823a60c852165e121dbc0ca54304e40f" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "morphology", "morphology", "morphology" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [], "file": [] }
[ "What is the performance of the models on the tasks?" ]
[ [ "1912.00582-Results ::: Overall Results-0", "1912.00582-Results-0" ] ]
[ "Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)" ]
595
1705.10586
Character-Based Text Classification using Top Down Semantic Model for Sentence Representation
Despite the success of deep learning on many fronts, especially image and speech, its application to text classification is often still not as good as a simple linear SVM on n-gram TF-IDF representations, especially for smaller datasets. Deep learning tends to emphasize sentence-level semantics when learning a representation with models like recurrent neural networks or recursive neural networks; however, the success of TF-IDF representations suggests that a bag-of-words type of representation has its own strengths. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that captures both the word-level semantics, by linearly combining the words with attention weights, and the sentence-level semantics, with a BiLSTM, and we use it for text classification. We apply the model to characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by \cite{zhang15} across seven different datasets with only 1\% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news articles, where deep learning models typically fall short.
{ "paragraphs": [ [ "Recently, deep learning has been particularly successful in speech and image as an automatic feature extractor BIBREF1 , BIBREF2 , BIBREF3 , however deep learning's application to text as an automatic feature extractor has not been always successful BIBREF0 even compared to simple linear models with BoW or TF-IDF feature representation. In many experiments when the text is polished like news articles or when the dataset is small, BoW or TF-IDF is still the state-of-art representation compared to sent2vec or paragraph2vec BIBREF4 representation using deep learning models like RNN (Recurrent Neural Network) or CNN (Convolution Neural Network) BIBREF0 . It is only when the dataset becomes large or when the words are noisy and non-standardized with misspellings, text emoticons and short-forms that deep learning models which learns the sentence-level semantics start to outperform BoW representation, because under such circumstances, BoW representation can become extremely sparse and the vocabulary size can become huge. It becomes clear that for large, complex data, a large deep learning model with a large capacity can extract a better sentence-level representation than BoW sentence representation. However, for small and standardized news-like dataset, a direct word counting TF-IDF sentence representation is superior. Then the question is can we design a deep learning model that performs well for both simple and complex, small and large datasets? And when the dataset is small and standardized, the deep learning model should perform comparatively well as BoW? With that problem in mind, we designed TDSM (Top-Down-Semantic-Model) which learns a sentence representation that carries the information of both the BoW-like representation and RNN style of sentence-level semantic which performs well for both simple and complex, small and large datasets.", "Getting inspiration from the success of TF-IDF representation, our model intends to learn a word topic-vector which is similar to TF-IDF vector of a word but is different from word embedding, whereby the values in the topic-vector are all positives, and each dimension of the topic-vector represents a topic aspect of the word. Imagine a topic-vector of representation meaning $[animal, temperature, speed]$ , so a $rat$ maybe represented as $[0.9, 0.7, 0.2]$ since $rat$ is an animal with high body temperature but slow running speed compared to a $car$ which maybe represented as $[0.1, 0.8, 0.9]$ for being a non-animal, but high engine temperature and fast speed. A topic-vector will have a much richer semantic meaning than one-hot TF-IDF representation and also, it does not have the cancellation effect of summing word-embeddings positional vectors $([-1, 1] + [1, -1] = [0, 0])$ . The results from BIBREF5 show that by summing word-embedding vectors as sentence representation will have a catastrophic result for text classification.", "Knowing the topic-vector of each word, we can combine the words into a sentence representation $\\tilde{s}$ by learning a weight $w_i$ for each word ${v_i}$ and do a linear sum of the words, $\\tilde{s} = \\sum _i {w_i}\\tilde{v_i}$ . The weights $w_i$ for each word in the sentence summation is learnt by recurrent neural network (RNN) BIBREF6 with attention over the words BIBREF7 . The weights corresponds to the IDF (inverse document frequency) in TF-IDF representation, but with more flexibility and power. 
IDF is fixed for each word and calculated from all the documents (entire dataset), however attention weights learned from RNN is conditioned on both the document-level and dataset-level semantics. This sentence representation from topic-vector of each word is then concatenated with the sentence-level semantic vector from RNN to give a top-down sentence representation as illustrated in Figure 2 ." ], [ "TDSM is a framework that can be applied to both word-level or character-level inputs. Here in this paper, we choose character-level over word-level inputs for practical industry reasons.", "In industry applications, often the model is required to have continuous learning on datasets that morph over time. Which means the vocabulary may change over time, therefore feeding the dataset by characters dispel the need for rebuilding a new vocabulary every time when there are new words.", "Industry datasets are usually very complex and noisy with a large vocabulary, therefore the memory foot-print of storing word embeddings is much larger than character embeddings.", "Therefore improving the performance of a character-based model has a much larger practical value compared to word-based model." ], [ "There are many traditional machine learning methods for text classification and most of them could achieve quite good results on formal text datasets. Recently, many deep learning methods have been proposed to solve the text classification task BIBREF0 , BIBREF9 , BIBREF10 .", "Deep convolutional neural network has been extremely successful for image classification BIBREF11 , BIBREF12 . Recently, many research also tries to apply it on text classification problem. Kim BIBREF10 proposed a model similar to Collobert's et al. BIBREF13 architecture. However, they employ two channels of word vectors. One is static throughout training and the other is fine-tuned via back-propagation. Various size of filters are applied on both channels, and the outputs are concatenated together. Then max-pooling over time is taken to select the most significant feature among each filter. The selected features are concatenated as the sentence vector.", "Similarly, Zhang et al. BIBREF0 also employs the convolutional networks but on characters instead of words for text classification. They design two networks for the task, one large and one small. Both of them have nine layers including six convolutional layers and three fully-connected layers. Between the three fully connected layers they insert two dropout layers for regularization. For both convolution and max-pooling layers, they employ 1-D filters BIBREF14 . After each convolution, they apply 1-D max-pooling. Specially, they claim that 1-D max-pooling enable them to train a relatively deep network.", "Besides applying models directly on testing datasets, more aspects are considered when extracting features. Character-level feature is adopted in many tasks besides Zhang et al. BIBREF0 and most of them achieve quite good performance.", "Santos and Zadrozny BIBREF15 take word morphology and shape into consideration which have been ignored for part-of-speech tagging task. They suggest the intra-word information is extremely useful when dealing with morphologically rich languages. They adopt neural network model to learn the character-level representation which is further delivered to help word embedding learning.", "Kim et al. BIBREF16 constructs neural language model by analysis of word representation obtained from character composition. 
Results suggest that the model could encode semantic and orthographic information from character level.", " BIBREF17 , BIBREF7 uses two hierarchies of recurrent neural network to extract the document representation. The lower hierarchical recurrent neural network summarizes a sentence representation from the words in the sentence. The upper hierarchical neural network then summarizes a document representation from the sentences in the document. The major difference between BIBREF17 and BIBREF7 is that Yang applies attention over outputs from the recurrent when learning a summarizing representation.", "Attention model is also utilized in our model, which is used to assign weights for each word. Usually, attention is used in sequential model BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . The attention mechanism includes sensor, internal state, actions and reward. At each time-step, the sensor will capture a glimpse of the input which is a small part of the entire input. Internal state will summarize the extracted information. Actions will decide the next glimpse location for the next step and reward suggests the benefit when taking the action. In our network, we adopt a simplified attention network as BIBREF22 , BIBREF23 . We learn the weights over the words directly instead of through a sequence of actions and rewards.", "Residual network BIBREF3 , BIBREF24 , BIBREF25 is known to be able to make very deep neural networks by having skip-connections that allows gradient to back-propagate through the skip-connections. Residual network in BIBREF3 outperforms the state-of-the-art models on image recognition. He BIBREF24 introduces residual block as similar to feature refinement for image classification. Similarly, for text classification problem, the quality of sentence representation is also quite important for the final result. Thus, we try to adopt the residual block as in BIBREF3 , BIBREF24 to refine the sentence vector." ], [ "The entire model has only 780,000 parameters which is only 1% of the parameters in BIBREF0 large CNN model. We used BiLSTM BIBREF27 with 100 units in both forward and backward LSTM cell. The output from the BiLSTM is 200 dimensions after we concatenate the outputs from the forward and backward cells. We then use attention over the words by linearly transform 200 output dimensions to 1 follow by a softmax over the 1 dimension outputs from all the words. After the characters to topic-vector transformation by FCN, each topic vector will be 180 dimensions. The topic-vectors are then linearly sum with attention weights to form a 180 dimensions BoW-like sentence vector. This vector is further concatenate with 200 dimensions BiLSTM outputs. The 380 dimensions undergo 10 blocks of ResNet BIBREF25 plus one fully connected layer. We use RELU BIBREF29 for all the intra-layer activation functions. The source code will be released after some code refactoring and we built the models with tensorflow BIBREF30 and tensorgraph ." ], [ "Unlike word-embedding BIBREF26 , topic-vector tries to learn a distributed topic representation at each dimension of the representation vector, which thus allows the simple addition of the word-level topic-vectors to form a sentence representation. Figure 1 illustrates how topic-vector is extracted from characters in words using FCN (Fully Convolutional Network). In order to force word level representation with topic meanings, we apply a sigmoid function over the output from FCN. 
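The characters-to-topic-vector mapping just described can be sketched as a small convolutional module ending in a sigmoid. The sketch below is in PyTorch rather than the TensorFlow/tensorgraph stack the authors report using; the 180-dimensional sigmoid output follows the text, while the character vocabulary, embedding size, word length, and intermediate convolution widths are placeholders loosely based on details given elsewhere in the paper.

```python
# Hedged sketch: embed characters, convolve over the character sequence,
# max-pool, and squash through a sigmoid so every dimension of the word
# vector stays in [0, 1] (a distributed "topic" representation).
import torch
import torch.nn as nn

class CharToTopicVector(nn.Module):
    def __init__(self, n_chars=128, char_dim=100, topic_dim=180):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Sequential(
            nn.Conv1d(char_dim, 128, kernel_size=4), nn.ReLU(),
            nn.Conv1d(128, topic_dim, kernel_size=4), nn.ReLU(),
        )

    def forward(self, char_ids):             # (n_words, word_len)
        x = self.embed(char_ids)             # (n_words, word_len, char_dim)
        x = x.transpose(1, 2)                # Conv1d expects (N, C, L)
        x = self.conv(x)                     # (n_words, topic_dim, L')
        x = x.max(dim=2).values              # max-pool over character positions
        return torch.sigmoid(x)              # topic-vector in [0, 1]^topic_dim

words = torch.randint(0, 128, (5, 20))       # 5 words, 20 characters each
print(CharToTopicVector()(words).shape)      # torch.Size([5, 180])
```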
Doing so, restrain the values at each dimension to be between 0 and 1, thus forcing the model to learn a distributed topic representation of the word." ], [ "Forming a sentence representation from words can be done simply by summing of the word-embeddings which produce catastrophic results BIBREF5 due to the cancellation effect of adding embedding vectors (negative plus positive gives zero). Or in our model, the summing of word-level topic-vectors which give a much better sentence representation as shown in Table 3 than summing word-embeddings.", "Sentence vector derived from summing of the word topic-vectors is equivalent to the BoW vectors in word counting, whereby we treat the prior contribution of each word to the final sentence vector equally. Traditionally, a better sentence representation over BoW will be TF-IDF, which gives a weight to each word in a document in terms of IDF (inverse document frequency). Drawing inspiration from TF-IDF representation, we can have a recurrent neural network that outputs the attention BIBREF7 over the words. And the attention weights serve similar function as the IDF except that it is local to the context of the document, since the attention weight for a word maybe different for different documents while the IDF of a word is the same throughout all documents. With the attention weights $w_i$ and word topic-vector $\\tilde{v}_i$ , we can form a sentence vector $\\tilde{s}_{bow}$ by linear sum ", "$$\\tilde{s}_{bow} = \\sum _i w_i \\tilde{v}_i$$ (Eq. 10) ", "With the neural sentence vector that derived from BoW which captures the information of individual words. We can also concatenate it with the output state from the RNN which captures the document level information and whose representation is conditioned on the positioning of the words in the document.", "$$&\\tilde{s}_{t} = RNN(\\tilde{v}_{t}, \\tilde{s}_{t-1}) \\\\\n&\\tilde{s}_{pos} = \\tilde{s}_T \\\\\n\n&\\tilde{s} = \\tilde{s}_{bow} \\oplus \\tilde{s}_{pos}$$ (Eq. 12) ", " where $T$ is the length of the document, and $\\oplus $ represents concatenation such that $|\\tilde{s}| = |\\tilde{s}_{bow}| + |\\tilde{s}_{pos}|$ . The overall sentence vector $\\tilde{s}$ will then capture the information of both the word-level and document-level semantics of the document. And thus it has a very rich representation.", "We used Bi-directional LSTM (BiLSTM) BIBREF27 as the recurrent unit. BiLSTM consist of a forward LSTM (FLSTM) and a backward LSTM (BLSTM), both LSTMs are of the same design, except that FLSTM reads the sentence in a forward manner and BLSTM reads the sentence in a backward manner. One recurrent step in LSTM of Equation 12 consists of the following steps ", "$$\\tilde{f}_t &= \\sigma \\big (\\mathbf {W}_f (\\tilde{s}_{t-1} \\oplus \\tilde{v}_t) + \\tilde{b}_f \\big ) \\\\\n\\tilde{i}_t &= \\sigma \\big (\\mathbf {W}_i (\\tilde{s}_{t-1} \\oplus \\tilde{v}_t) + \\tilde{b}_i\\big ) \\\\\n\\tilde{C}_t &= \\tanh \\big (\\mathbf {W}_C(\\tilde{s}_{t-1}, \\tilde{v}_t) + \\tilde{b}_C\\big ) \\\\\n\\tilde{C}_t &= \\tilde{f}_t \\otimes \\tilde{C}_{t-1} + \\tilde{i}_t \\otimes \\tilde{C}_t \\\\\n\\tilde{o}_t &= \\sigma \\big (\\mathbf {W}_o (\\tilde{s}_{t-1} \\oplus \\tilde{v}_t) + \\tilde{b}_o \\big ) \\\\\n\\tilde{s}_t &= \\tilde{o}_t \\otimes \\tanh (\\tilde{C}_t)$$ (Eq. 14) ", " where $\\otimes $ is the element-wise vector multiplication, $\\oplus $ is vector concatenation similarly defined in Equation . 
$\\tilde{f}_t$ is forget state, $\\tilde{i}_t$ is input state, $\\tilde{o}_t$ is output state, $\\tilde{C}_t$ is the internal context which contains the long-short term memory of historical semantics that LSTM reads. Finally, the output from the BiLSTM will be a concatenation of the output from FLSTM and BLSTM ", "$$\\tilde{f}_t &= \\text{FLSTM}(\\tilde{v}_t, \\tilde{s}_{t-1}) \\\\\n\\tilde{b}_t &= \\text{BLSTM}(\\tilde{v}_t, \\tilde{s}_{t+1}) \\\\\n\\tilde{h}_{t} &= \\tilde{f}_t \\oplus \\tilde{b}_t$$ (Eq. 15) ", "Here, the concatenated output state of BiLSTM has visibility of the entire sequence at any time-step compared to single-directional LSTM which only has visibility of the sequence in the past. This property of BiLSTM is very useful for learning attention weights for each word in a document because then the weights are decided based on the information of the entire document instead of just words before it as in LSTM." ], [ "We use the standard benchmark datasets prepare by BIBREF0 . The datasets have different number of training samples and test samples ranging from 28,000 to 3,600,000 training samples, and of different text length ranging from average of 38 words for Ag News to 566 words in Sogou news as illustrated in Table 1 . The datasets are a good mix of polished (AG) and noisy (Yelp and Amazon reviews), long (Sogou) and short (DBP and AG), large (Amazon reviews) and small (AG) datasets. And thus the results over these datasets serve as good evaluation on the quality of the model." ], [ "In this paper, we take 128 ASCII characters as character set, by which most of the testing documents are composite. We define word length as 20 and character embedding length as 100. If a word with characters less than 20, we will pad it with zeros. If the length is larger than 20, we just take the first 20 characters. We set the maximum length of words as the average number of words of the documents in the dataset plus two standard deviation, which is long enough to cover more than 97.5% of the documents. For documents with number of words more than the preset maximum number of words, we will discard the exceeding words." ], [ "We select both traditional models and the convolutional models from BIBREF0 , the recurrent models from BIBREF7 , BIBREF17 as baselines. Also in order to ensure a fair comparison of models, such that any variation in the result is purely due to the model difference, we compare TDSM only with models that are trained in the same way of data preparation, that is the words are lowered and there are no additional data alteration or augmentation with thesaurus. Unfortunately, BIBREF7 , BIBREF17 recurrent models are trained on full text instead of lowered text, so their models may not be objectively compared to our models, since it is well known from BIBREF28 that different text preprocessing will have significant impact on the final results, Zhang's result shows that a simple case lowering can result up to 4% difference in classification accuracy. Despite this, we still include the recurrent models for comparison, because they provide a good reference for understanding time-based models on large datasets of long sentences.", "is the standard word counting method whereby the feature vector represents the term frequency of the words in a sentence.", "is similar to BoW, except that it is derived by the counting of the words in the sentence weighted by individual word's term-frequency and inverse-document-frequency BIBREF31 . 
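As a point of reference for the BoW and TF-IDF baselines just listed, here is a minimal scikit-learn sketch of the kind of linear TF-IDF classifier they represent. The texts, labels, and hyperparameters are placeholders; the excerpt does not specify the exact settings used in the original comparison.

```python
# Minimal sketch of a TF-IDF + linear SVM text classifier, the classical
# baseline that character-based deep models are compared against.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["stocks fell sharply today",
               "the striker scored twice",
               "new planet discovered by astronomers"]
train_labels = ["business", "sports", "science"]

clf = make_pipeline(TfidfVectorizer(lowercase=True, ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["the goalkeeper saved a penalty"]))
```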
This is a very competitive model especially on clean and small dataset.", "is derived from clustering of words-embeddings with k-means into 5000 clusters, and follow by BoW representation of the words in 5000 clusters.", "is CNN model on word embeddings following BIBREF0 , to ensure fair comparison with character-based models, the CNN architecture is the same as Lg. Conv and Sm. Conv with the same number of parameters.", "from BIBREF17 is basically a recurrent neural network based on LSTM and GRU BIBREF32 over the words in a sentence, and over the sentences in a document. It tries to learn a hierarchical representation of the text from multi-levels of recurrent layers.", "from BIBREF7 is basically similar to LSTM-GRNN except that instead of just learning the hierarchical representation of the text directly with RNN, it also learns attention weights over the words during the summarization of the words and over the sentences during the summarization of the sentences.", "are proposed in BIBREF0 , which is a CNN model on character encoding and is the primary character-based baseline model that we are comparing with." ], [ "Table 3 shows the comparison results of different datasets with different size, different sentence length, and different quality (polished AG news vs messy Yelp and Amazon reviews).", "From the results, we see that TDSM out-performs all the other CNN models across all the datasets with only 1% of the parameters of Zhang's large conv model and 7.8% of his small conv model. Since these results are based on the same text preprocessing and across all kinds of dataset (long, short, large, small, polished, messy), we can confidently say that TDSM generalizes better than the other CNN models over text classification. These results show that a good architecture design can achieve a better accuracy with significantly less parameters.", "Character-based models are the most significant and practical model for real large scale industry deployment because of its smaller memory footprint, agnostic to changes in vocabulary and robust to misspellings BIBREF16 . For a very long time, TF-IDF has been state-of-art models especially in small and standardized datasets. However because of its large memory footprint and non-suitability for continuous learning (because a new vocabulary has to be rebuilt every once in awhile when there are new words especially for data source like Tweeter), it was not an ideal model until character-based models came out. From the results, previous character-based models are generally better than TF-IDF for large datasets but falls short for smaller dataset like AG news. TDSM successfully close the gap between character-based models and TF-IDF by beating TF-IDF with 1% better performance. The results also confirm the hypothesis that TDSM as illustrated in Figure 2 which contains both the BoW-like and sentence-level features, has the best of the traditional TF-IDF and the recent deep learning model, is able to perform well for both small and large datasets.", "From the results, we also observe that TDSM improves over other character-based models by a big margin of 3% for Lg. Conv and 5.7% for Sm. Conv on the AG dataset. But the improvement tails off to only 0.5% for Amazon reviews when the dataset size increases from 120,000 to 3.6 million. 
This is probably because TDSM has reached its maximum capacity when the dataset gets very large, compared to other character-based models which have 100 times the capacity of TDSM.", "For Yelp Full, we observe that the hierarchical recurrent models LSTM-GRNN and HN-ATT perform about 10 percentage points better than TDSM, but the gap drops to only 3% for Amazon Full. This may be partly due to their data being prepared differently from our models. This can also be due to the structure of these hierarchical recurrent models, which have two levels of recurrent neural networks for summarizing a document, whereby the first level summarizes a sentence vector from the words and the second level summarizes a document vector from the sentences. So these models will start to perform much better when there are a lot of sentences and words in a document. For Yelp Full, there are on average 134 words in one document and Amazon Full has about 80 words per document. That's why the performance is much better for these recurrent models on Yelp than on Amazon. However, these hierarchical recurrent models will be reduced to a purely vanilla RNN for short text like AG News or Tweets with a few sentences, and under such circumstances their results will not be much different from a standard RNN. Nevertheless, LSTM-GRNN or HN-ATT does indicate the strength of RNN models in summarizing the sentences and documents and deriving a coherent sentence-level and document-level representation." ], [ "From the results, we see a strong promise for TDSM as a competitive model for text classification because of its hybrid architecture that looks at the sentence from both the traditional TF-IDF point of view and the recent deep learning point of view. The results show that this type of view can derive a rich text representation for both small and large datasets." ] ], "section_name": [ "Introduction", "TDSM on Characters", "Related Work", "Model", "Characters to Topic-Vector", "Sentence Representation Vector Construction", "Datasets", "Word Setting", "Baseline Models", "Results", "Conclusion" ] }
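Pulling together the Model and Sentence Representation sections above, the sketch below shows the overall TDSM sentence representation: a BiLSTM reads the word topic-vectors, an attention softmax over its outputs weights the topic-vectors into a BoW-like vector, and this is concatenated with the BiLSTM summary before classification. It is written in PyTorch rather than the authors' TensorFlow code; the dimensions (180-d topic-vectors, 100 hidden units per direction, 380-d concatenation) follow the text, the way the forward/backward final states are read off is one reasonable interpretation, and the plain linear classifier stands in for the paper's 10 ResNet blocks plus a fully connected layer.

```python
# Hedged sketch of the TDSM sentence-representation pipeline.
import torch
import torch.nn as nn

class TDSMSentence(nn.Module):
    def __init__(self, topic_dim=180, hidden=100, n_classes=4):
        super().__init__()
        self.hidden = hidden
        self.bilstm = nn.LSTM(topic_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(topic_dim + 2 * hidden, n_classes)

    def forward(self, topic_vecs):                        # (batch, n_words, 180)
        h, _ = self.bilstm(topic_vecs)                    # (batch, n_words, 200)
        weights = torch.softmax(self.attn(h), dim=1)      # attention over words
        s_bow = (weights * topic_vecs).sum(dim=1)         # weighted topic-vector sum
        s_pos = torch.cat([h[:, -1, :self.hidden],        # forward LSTM, last step
                           h[:, 0, self.hidden:]], dim=-1)  # backward LSTM, first step
        return self.classifier(torch.cat([s_bow, s_pos], dim=-1))

x = torch.sigmoid(torch.randn(2, 30, 180))                # 2 documents, 30 words each
print(TDSMSentence()(x).shape)                            # torch.Size([2, 4])
```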
{ "answers": [ { "annotation_id": [ "587358249a7de0e5f3d815eb49e85fd171d99617" ], "answer": [ { "evidence": [ "is the standard word counting method whereby the feature vector represents the term frequency of the words in a sentence.", "is similar to BoW, except that it is derived by the counting of the words in the sentence weighted by individual word's term-frequency and inverse-document-frequency BIBREF31 . This is a very competitive model especially on clean and small dataset.", "is derived from clustering of words-embeddings with k-means into 5000 clusters, and follow by BoW representation of the words in 5000 clusters." ], "extractive_spans": [], "free_form_answer": "bag of words, tf-idf, bag-of-means", "highlighted_evidence": [ "is the standard word counting method whereby the feature vector represents the term frequency of the words in a sentence.", "is similar to BoW, except that it is derived by the counting of the words in the sentence weighted by individual word's term-frequency and inverse-document-frequency BIBREF31 .", "is derived from clustering of words-embeddings with k-means into 5000 clusters, and follow by BoW representation of the words in 5000 clusters." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "" ], "paper_read": [ "no" ], "question": [ "What other non-neural baselines do the authors compare to? " ], "question_id": [ "2df2f6e4efd19023434c84f5b4f29a2f00bfc9fb" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "representation" ], "topic_background": [ "unfamiliar" ] }
{ "caption": [ "Figure 1: Illustration of transformation from character-level embeddings to word-level topic-vector. The transformation is done with fully convolutional network (FCN) similar to (Long et al., 2015), each hierarchical level of the FCN will extract an n-gram character feature of the word until the word-level topic-vector.", "Figure 2: TDSM: Illustration of how topic-vector of the words are combined together into a sentence representation. Note that for the actual model, we are using BiLSTM for extracting positional features. In this diagram, in order to present our idea in a neater manner, we demonstrate with a vanilla RNN.", "Table 1: Statistics of datasets.", "Table 2: Fully-Convolutional Network from characters to topic-vector. The first convolutional layer has kernel size of (100, 4) where 100 is the embedding size over 4-gram character as shown in Figure 1.", "Table 3: Comparison results on accuracy for various models. Lg w2v Conv and Sm. w2v Conv is CNN on word embedding. Lg. Conv and Sm. Conv is CNN on character embedding. LSTM-GRNN and HN-ATT are different species of recurrent neural networks on words and sentences. Unfortunately, these two RNN models did not use the same text preprocessing technique as other models, so their models may not be objectively comparable to Zhang’s or our model, because it is well known that (Zhang et al., 2015; Uysal & Gunal, 2014), the difference in text preprocessing will have a significant impact on the final accuracy. However, these RNN models are still a good reference for our understanding of time-based models on large datasets of long sentences.", "Table 4: Number of parameters for different models" ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png" ] }
[ "What other non-neural baselines do the authors compare to? " ]
[ [ "1705.10586-Baseline Models-2", "1705.10586-Baseline Models-1", "1705.10586-Baseline Models-3" ] ]
[ "bag of words, tf-idf, bag-of-means" ]
598
1909.01958
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project
AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge. ::: This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field.
{ "paragraphs": [ [ "This paper reports on the history, progress, and lessons from the Aristo project, a six-year quest to answer grade-school and high-school science exams. Aristo has recently surpassed 90% on multiple choice questions from the 8th Grade New York Regents Science Exam (see Figure FIGREF6). We begin by offering several perspectives on why this achievement is significant for NLP and for AI more broadly." ], [ "In 1950, Alan Turing proposed the now well-known Turing Test as a possible test of machine intelligence: If a system can exhibit conversational behavior that is indistinguishable from that of a human during a conversation, that system could be considered intelligent (BID1). As the field of AI has grown, the test has become less meaningful as a challenge task for several reasons. First, its setup is not well defined (e.g., who is the person giving the test?). A computer scientist would likely know good distinguishing questions to ask, while a random member of the general public may not. What constraints are there on the interaction? What guidelines are provided to the judges? Second, recent Turing Test competitions have shown that, in certain formulations, the test itself is gameable; that is, people can be fooled by systems that simply retrieve sentences and make no claim of being intelligent (BID2;BID3). John Markoff of The New York Times wrote that the Turing Test is more a test of human gullibility than machine intelligence. Finally, the test, as originally conceived, is pass/fail rather than scored, thus providing no measure of progress toward a goal, something essential for any challenge problem.", "Instead of a binary pass/fail, machine intelligence is more appropriately viewed as a diverse collection of capabilities associated with intelligent behavior. Finding appropriate benchmarks to test such capabilities is challenging; ideally, a benchmark should test a variety of capabilities in a natural and unconstrained way, while additionally being clearly measurable, understandable, accessible, and motivating.", "Standardized tests, in particular science exams, are a rare example of a challenge that meets these requirements. While not a full test of machine intelligence, they do explore several capabilities strongly associated with intelligence, including language understanding, reasoning, and use of common-sense knowledge. One of the most interesting and appealing aspects of science exams is their graduated and multifaceted nature; different questions explore different types of knowledge, varying substantially in difficulty. For this reason, they have been used as a compelling—and challenging—task for the field for many years (BID4;BID5)." ], [ "With the advent of contextualized word-embedding methods such as ELMo (BID6), BERT (BID7), and most recently RoBERTa (BID8), the NLP community's benchmarks are being felled at a remarkable rate. These are, however, internally-generated yardsticks, such as SQuAD (BID9), Glue (BID10), SWAG (BID11), TriviaQA (BID12), and many others.", "In contrast, the 8th Grade science benchmark is an external, independently-generated benchmark where we can compare machine performance with human performance. Moreover, the breadth of the vocabulary and the depth of the questions is unprecedented. For example, in the ARC question corpus of science questions, the average question length is 22 words using a vocabulary of over 6300 distinct (stemmed) words (BID13). 
Finally, the questions often test scientific knowledge by applying it to everyday situations and thus require aspects of common sense. For example, consider the question: Which equipment will best separate a mixture of iron filings and black pepper? To answer this kind of question robustly, it is not sufficient to understand magnetism. Aristo also needs to have some model of “black pepper\" and “mixture\" because the answer would be different if the iron filings were submerged in a bottle of water. Aristo thus serves as a unique “poster child\" for the remarkable and rapid advances achieved by leveraging contextual word-embedding models in, NLP." ], [ "Within NLP, machine understanding of textbooks is a grand AI challenge that dates back to the '70s, and was re-invigorated in Raj Reddy's 1988 AAAI Presidential Address and subsequent writing (BID14;BID15). However, progress on this challenge has a checkered history. Early attempts side-stepped the natural language understanding (NLU) task, in the belief that the main challenge lay in problem-solving. For example, Larkin1980ModelsOC manually encoded a physics textbook chapter as a set of rules that could then be used for question answering. Subsequent attempts to automate the reading task were unsuccessful, and the language task itself has emerged as a major challenge for AI.", "In recent years there has been substantial progress in systems that can find factual answers in text, starting with IBM's Watson system (BID16), and now with high-performing neural systems that can answer short questions provided they are given a text that contains the answer (BID17;BID18). The work presented here continues along this trajectory, but aims to also answer questions where the answer may not be written down explicitly. While not a full solution to the textbook grand challenge, this work is thus a further step along this path." ], [ "Project Aristo emerged from the late Paul Allen's long-standing dream of a Digital Aristotle, an “easy-to-use, all-encompassing knowledge storehouse...to advance the field of AI.” (BID19). Initially, a small pilot program in 2003 aimed to encode 70 pages of a chemistry textbook and answer the questions at the end of the chapter. The pilot was considered successful (BID20), with the significant caveat that both text and questions were manually encoded, side-stepping the natural language task, similar to earlier efforts. A subsequent larger program, called Project Halo, developed tools allowing domain experts to rapidly enter knowledge into the system. However, despite substantial progress (BID21;BID22), the project was ultimately unable to scale to reliably acquire textbook knowledge, and was unable to handle questions expressed in full natural language.", "In 2013, with the creation of the Allen Institute for Artificial Intelligence (AI2), the project was rethought and relaunched as Project Aristo (connoting Aristotle as a child), designed to avoid earlier mistakes. In particular: handling natural language became a central focus; Most knowledge was to be acquired automatically (not manually); Machine learning was to play a central role; questions were to be answered exactly as written; and the project restarted at elementary-level science (rather than college-level) (BID23).", "The metric progress of the Aristo system on the Regents 8th Grade exams (non-diagram, multiple choice part, for a hidden, held-out test set) is shown in Figure FIGREF6. 
The figure shows the variety of techniques attempted, and mirrors the rapidly changing trajectory of the Natural Language Processing (NLP) field in general. Early work was dominated by information retrieval, statistical, and automated rule extraction and reasoning methods (BID24;BID25;BID26;BID27;BID28). Later work has harnessed state-of-the-art tools for large-scale language modeling and deep learning (BID29;BID30), which have come to dominate the performance of the overall system and reflects the stunning progress of the field of NLP as a whole." ], [ "We now describe the architecture of Aristo, and provide a brief summary of the solvers it uses." ], [ "The current configuration of Aristo comprises of eight solvers, described shortly, each of which attempts to answer a multiple choice question. To study particular phenomena and develop solvers, the project has created larger datasets to amplify and study different problems, resulting in 10 new datasets and 5 large knowledge resources for the community.", "The solvers can be loosely grouped into:", "Statistical and information retrieval methods", "Reasoning methods", "Large-scale language model methods", "Over the life of the project, the relative importance of the methods has shifted towards large-scale language methods.", "Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \\times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25)." ], [ "Three solvers use information retrieval (IR) and statistical measures to select answers. These methods are particularly effective for “lookup” questions where an answer is explicitly stated in the Aristo corpus.", "The IR solver searches to see if the question along with an answer option is explicitly stated in the corpus, and returns the confidence that such a statement was found. To do this, for each answer option $a_i$, it sends $q$ + $a_i$ as a query to a search engine (we use ElasticSearch), and returns the search engine’s score for the top retrieved sentence $s$, where $s$ also has at least one non-stopword overlap with $q$, and at least one with $a_i$. This ensures $s$ has some relevance to both $q$ and $a_i$. This is repeated for all options $a_i$ to score them all, and the option with the highest score selected. Further details are available in (BID25).", "The PMI solver uses pointwise mutual information (BID31) to measure the strength of the associations between parts of $q$ and parts of $a_i$. Given a large corpus $C$, PMI for two n-grams $x$ and $y$ is defined as $\\mathrm {PMI}(x,y) = \\log \\frac{p(x,y)}{p(x) p(y)}$. Here $p(x,y)$ is the joint probability that $x$ and $y$ occur together in $C$, within a certain window of text (we use a 10 word window). The term $p(x) p(y)$, on the other hand, represents the probability with which $x$ and $y$ would occur together if they were statistically independent. The ratio of $p(x,y)$ to $p(x) p(y)$ is thus the ratio of the observed co-occurrence to the expected co-occurrence. The larger this ratio, the stronger the association between $x$ and $y$. The solver extracts unigrams, bigrams, trigrams, and skip-bigrams from the question $q$ and each answer option $a_i$. It outputs the answer with the largest average PMI, calculated over all pairs of question n-grams and answer option n-grams. 
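The PMI scoring just described can be illustrated with a much simplified sketch: unigrams only, a tiny toy corpus, and a co-occurrence window like the 10-word window mentioned above. This is not the Aristo implementation, which also uses bigrams, trigrams, and skip-bigrams over a far larger corpus; the example question and corpus sentences are placeholders.

```python
# Simplified sketch: PMI(x, y) = log( p(x, y) / (p(x) p(y)) ), with p(x, y)
# estimated from co-occurrence within a fixed-size window, and an answer
# option scored by its average PMI with the question words.
import math
from collections import Counter

corpus = ("products contaminated with microorganisms may cause infection . "
          "mutations change genes . climate changes affect weather .").split()
WINDOW = 10  # co-occurrence window, as in the description above

unigram = Counter(corpus)
pair = Counter()
for i, x in enumerate(corpus):
    for y in corpus[i + 1:i + WINDOW]:
        pair[frozenset((x, y))] += 1
total = len(corpus)

def pmi(x, y):
    joint = pair[frozenset((x, y))]
    if joint == 0 or unigram[x] == 0 or unigram[y] == 0:
        return 0.0  # unseen pairs contribute nothing in this toy version
    return math.log((joint / total) / ((unigram[x] / total) * (unigram[y] / total)))

def option_score(question_words, answer_words):
    pairs = [(q, a) for q in question_words for a in answer_words]
    return sum(pmi(q, a) for q, a in pairs) / len(pairs)

question = ["cause", "infection"]
for option in (["microorganisms"], ["mutations"]):
    print(option, option_score(question, option))
```

With a realistically large corpus, the option whose n-grams co-occur most strongly with the question's n-grams receives the highest average PMI and is selected.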
Further details are available in (BID25).", "Finally, ACME (Abstract-Concrete Mapping Engine) searches for a cohesive link between a question $q$ and candidate answer $a_{i}$ using a large knowledge base of vector spaces that relate words in language to a set of 5000 scientific terms enumerated in a term bank. ACME uses three types of vector spaces: terminology space, word space, and sentence space. Terminology space is designed for finding a term in the term bank that links a question to a candidate answer with strong lexical cohesion. Word space is designed to characterize a word by the context in which the word appears. Sentence space is designed to characterize a sentence by the words that it contains. The key insight in ACME is that we can better assess lexical cohesion of a question and answer by pivoting through scientific terminology, rather than by simple co-occurence frequencies of question and answer words. Further details are provided in (BID32).", "These solvers together are particularly good at “lookup” questions where an answer is explicitly written down in the Aristo Corpus. For example, they correctly answer:", "Infections may be caused by (1) mutations (2) microorganisms [correct] (3) toxic substances (4) climate changes", "as the corpus contains the sentence “Products contaminated with microorganisms may cause infection.” (for the IR solver), as well as many other sentences mentioning both “infection” and “microorganisms” together (hence they are highly correlated, for the PMI solver), and both words are strongly correlated with the term “microorganism” (ACME)." ], [ "The TupleInference solver uses semi-structured knowledge in the form of tuples, extracted via Open Information Extraction (Open IE) (BID33). Two sources of tuples are used:", "A knowledge base of 263k tuples ($T$), extracted from the Aristo Corpus plus several domain-targeted sources, using training questions to retrieve science-relevant information.", "On-the-fly tuples ($T^{\\prime }$), extracted at question-answering time from t<he same corpus, to handle questions from new domains not covered by the training set.", "TupleInference treats the reasoning task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure FIGREF15 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) (BID34), however, we must score alignments between the tuples retrieved from the two sources above, $T_{\\mathit {qa}} \\cup T^{\\prime }_{\\mathit {qa}}$, and a (potentially multi-sentence) multiple choice question $qa$.", "The qterms, answer choices, and tuples fields (i.e. subject, predicate, objects) form the set of possible vertices, $\\mathcal {V}$, of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\\mathcal {E}$. The support graph, $G(V, E)$, is a subgraph of $\\mathcal {G}(\\mathcal {V}, \\mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, respectively. We define an ILP optimization model to search for the best support graph (i.e., the active nodes and edges), where a set of constraints define the structure of a valid support graph (e.g., an edge must connect an answer choice to a tuple) and the objective defines the preferred properties (e.g. active edges should have high word-overlap). Details of the constraints are given in (BID27). 
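To make the flavor of this optimization concrete, the sketch below sets up a heavily simplified support-graph ILP with the open-source PuLP modeler. The structural constraints, the `overlap` scoring function, and the cap on active tuples are illustrative assumptions; the deployed solver's constraint set is richer and is solved with SCIP, as described next.

```python
import pulp

def support_graph_score(qterms, tuples, answer, overlap, max_tuples=3):
    """Score one answer choice via a simplified support-graph ILP.

    qterms: question terms (strings); tuples: (subject, predicate, object)
    text tuples; overlap(a, b) -> non-negative word-overlap score.
    Assumes at least one retrieved tuple.
    """
    prob = pulp.LpProblem("support_graph", pulp.LpMaximize)
    n = len(tuples)

    # Binary variables: active tuples, qterm-tuple edges, tuple-answer edges.
    t_active = [pulp.LpVariable(f"t_{j}", cat="Binary") for j in range(n)]
    qt = {(i, j): pulp.LpVariable(f"qt_{i}_{j}", cat="Binary")
          for i in range(len(qterms)) for j in range(n)}
    ta = [pulp.LpVariable(f"ta_{j}", cat="Binary") for j in range(n)]

    # Objective: active edges should have high word overlap.
    prob += (pulp.lpSum(overlap(qterms[i], " ".join(tuples[j])) * qt[i, j]
                        for (i, j) in qt)
             + pulp.lpSum(overlap(" ".join(tuples[j]), answer) * ta[j]
                          for j in range(n)))

    # A few structural constraints (the real solver has many more):
    for (i, j) in qt:
        prob += qt[i, j] <= t_active[j]          # edges only touch active tuples
    for j in range(n):
        prob += ta[j] <= t_active[j]
    prob += pulp.lpSum(t_active) <= max_tuples   # keep the support graph small
    prob += pulp.lpSum(ta) >= 1                  # the answer must be connected

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

# Each answer option is scored by one such ILP; the highest-scoring option wins.
```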
We then use the SCIP ILP optimization engine (BID35) to solve the ILP model. To obtain the score for each answer choice $a_i$, we force the node for that choice $x_{a_i}$ to be active and use the objective function value of the ILP model as the score. The answer choice with the highest score is selected. Further details are available in (BID27).", "Multee (BID29) is a solver that repurposes existing textual entailment tools for question answering. Textual entailment (TE) is the task of assessing if one text implies another, and there are several high-performing TE systems now available. However, question answering often requires reasoning over multiple texts, and so Multee learns to reason with multiple individual entailment decisions. Specifically, Multee contains two components: (i) a sentence relevance model, which learns to focus on the relevant sentences, and (ii) a multi-layer aggregator, which uses an entailment model to obtain multiple layers of question-relevant representations for the premises and then composes them using the sentence-level scores from the relevance model. Finding relevant sentences is a form of local entailment between each premise and the answer hypothesis, whereas aggregating question-relevant representations is a form of global entailment between all premises and the answer hypothesis. This means we can effectively repurpose the same pre-trained entailment function $f_e$ for both components. Details of how this is done are given in (BID29). An example of a typical question and scored, retrieved evidence is shown in Figure FIGREF18. Further details are available in (BID29).", "The QR (qualitative reasoning) solver is designed to answer questions about qualitative influence, i.e., how more/less of one quantity affects another (see Figure FIGREF19). Unlike the other solvers in Aristo, it is a specialist solver that only fires for a small subset of questions that ask about qualitative change, identified using (regex) language patterns.", "The solver uses a knowledge base $K$ of 50,000 (textual) statements about qualitative influence, e.g., “A sunscreen with a higher SPF protects the skin longer.”, extracted automatically from a large corpus. It has then been trained to apply such statements to qualitative questions, e.g.,", "John was looking at sunscreen at the retail store. He noticed that sunscreens that had lower SPF would offer protection that is (A) Longer (B) Shorter [correct]", "In particular, the system learns through training to track the polarity of influences: For example, if we were to change “lower” to “higher” in the above example, the system will change its answer choice. Another example is shown in Figure FIGREF19. Again, if “melted” were changed to “cooled”, the system would change its choice to “(B) less energy”.", "The QR solver learns to reason using the BERT language model (BID7), using the approach described in Section SECREF21 below. It is fine-tuned on 3800 crowdsourced qualitative questions illustrating the kinds of manipulation required, along with the associated qualitative knowledge sentence. The resulting system is able to answer questions that include significant linguistic and knowledge gaps between the question and retrieved knowledge (Table TABREF20).", "Because the number of qualitative questions is small in our dataset, the solver does not significantly change Aristo's performance, although it does provide an explanation for its answers. For this reason we omit it in the results later. 
Further details and a detailed separate evaluation is available in (BID36)." ], [ "The field of NLP has advanced substantially with the advent of large-scale language models such as ELMo (BID6), ULMFit (BID37), GPT (BID38), BERT (BID7), and RoBERTa (BID8). These models are trained to perform various language prediction tasks such as predicting a missing word or the next sentence, using large amounts of text (e.g., BERT was trained on Wikipedia + the Google Book Corpus of 10,000 books). They can also be fine-tuned to new language prediction tasks, such as question-answering, and have been remarkably successful in the few months that they have been available.", "We apply BERT to multiple choice questions by treating the task as classification: Given a question $q$ with answer options $a_{i}$ and optional background knowledge $K_{i}$, we provide it to BERT as:", "[CLS] $K_i$ [SEP] $q$ [SEP] $a_{i}$ [SEP]", "for each option (only the answer option is assigned as the second BERT \"segment\"). The [CLS] output token for each answer option is projected to a single logit and fed through a softmax layer, trained using cross-entropy loss against the correct answer.", "The AristoBERT solver uses three methods to apply BERT more effectively. First, we retrieve and supply background knowledge along with the question when using BERT. This provides the potential for BERT to “read” that background knowledge and apply it to the question, although the exact nature of how it uses background knowledge is more complex and less interpretable. Second, we fine-tune BERT using a curriculum of several datasets, including some that are not science related. Finally, we ensemble different variants of BERT together.", "For background knowledge $K_i$ we use up to 10 of the top sentences found by the IR solver, truncated to fit into the BERT max tokens setting (we use 256).", "Following earlier work on multi-step fine-tuning (BID39), we first fine-tune on the large (87866 qs) RACE training set (BID40), a challenging set of English comprehension multiple choice exams given in Chinese middle and high schools.", "We then further fine-tune on a collection of science multiple choice questions sets:", "OpenBookQA train (4957 qs) (BID41)", "ARC-Easy train (2251 qs) (BID13)", "ARC-Challenge train (1119 qs) (BID13)", "22 Regents Living Environment exams (665 qs).", "We optimize the final fine-tuning using scores on the development set, performing a small hyperparameter search as suggested in the original BERT paper (BID7).", "We repeat the above using three variants of BERT, the original BERT-large-cased and BERT-large-uncased, as well as the later released BERT-large-cased-whole-word-masking. We also add a model trained without background knowledge and ensemble them using the combination solver described below.", "The AristoRoBERTa solver takes advantage of the recent release of Roberta (BID8), a high-performing and optimized derivative of BERT trained on significantly more text. In AristoRoBERTa, we simply replace the BERT model in AristoBERT with RoBERTa, repeating similar fine-tuning steps. We ensemble two versions together, namely with and without the first fine-tuning step using RACE." ], [ "Each solver outputs a non-negative confidence score for each of the answer options along with other optional features. 
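As an illustration of how a language-model solver turns a question into per-option scores, the following sketch uses the Hugging Face transformers library (an implementation assumption; the project's own code may differ). The checkpoint below is not fine-tuned, so its multiple-choice head is untrained; AristoBERT's fine-tuning curriculum and ensembling, described above, are what make such scores useful in practice.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "bert-large-uncased-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name).eval()

def score_options(question, options, knowledge=""):
    # Each option is encoded as (knowledge + question, option); only the answer
    # option sits in the second segment. Folding K and q into one segment is a
    # simplification of the [CLS] K [SEP] q [SEP] a_i [SEP] layout above.
    first = [f"{knowledge} {question}".strip()] * len(options)
    enc = tokenizer(first, options, truncation=True, max_length=256,
                    padding=True, return_tensors="pt")
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, n_options, seq_len)
    with torch.no_grad():
        logits = model(**enc).logits                   # (1, n_options)
    return torch.softmax(logits, dim=-1).squeeze(0).tolist()

scores = score_options(
    "The condition of the air outdoors at a certain time of day is known as",
    ["friction", "light", "force", "weather"])
```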
The Combiner then produces a combined confidence score (between 0 and 1) using the following two-step approach.", "In the first step, each solver is “calibrated” on the training set by learning a logistic regression classifier from each answer option to a correct/incorrect label. The features for an answer option $i$ include the raw confidence score $s_i$ as well as the score normalized across the answer options for a given question. We include two types of normalizations:", "Each solver can also provide other features capturing aspects of the question or the reasoning path. The output of this first step classifier is then a calibrated confidence for each solver $s$ and answer option $i$: $ \\mathit {calib}^s_i = 1/(1+\\exp (- \\beta ^s \\cdot f^s)) $ where $f^s$ is the solver specific feature vector and $\\beta ^s$ the associated feature weights.", "The second step uses these calibrated confidences as (the only) features to a second logistic regression classifier from answer option to correct/incorrect, resulting in a final confidence in $[0,1]$, which is used to rank the answers:", "Here, feature weights $\\beta ^s$ indicate the contribution of each solver to the final confidence. Empirically, this two-step approach yields more robust predictions given limited training data compared to a one-step approach where all solver features are fed directly into a single classification step." ], [ "This section describes our precise experimental methodology followed by our results." ], [ "In the experimental results reported below, we omitted questions that utilized diagrams. While these questions are frequent in the test, they are outside of our focus on language and reasoning. Moreover, the diagrams are highly varied (see Figure FIGREF22) and despite work that tackled narrow diagram types, e.g, food chains (BID42), overall progress has been quite limited (BID43).", "We also omitted questions that require a direct answer (rather than selecting from multiple choices), for two reasons. First, after removing questions with diagrams, they are rare in the remainder. Of the 482 direct answer questions over 13 years of Regents 8th Grade Science exams, only 38 ($<$8%) do not involve a diagram. Second, they are complex, often requiring explanation and synthesis. Both diagram and direct-answer questions are natural topics for future work." ], [ "We evaluate Aristo using several datasets of independently-authored science questions taken from standardized tests. Each dataset is divided into train, development, and test partitions, the test partitions being “blind”, i.e., hidden to both the researchers and the Aristo system during training. All questions are taken verbatim from the original sources, with no rewording or modification. As mentioned earlier, we use only the non-diagram, multiple choice (NDMC) questions. We exclude questions with an associated diagram that is required to interpret the question. In the occasional case where two questions share the same preamble, the preamble is repeated for each question so they are independent. The Aristo solvers are trained using questions in the training partition (each solver is trained independently, as described earlier), and then the combination is fine-tuned using the development set.", "The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. 
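These question sets, together with the ARC sets described below, are publicly distributed. As one way to inspect the ARC portion, the sketch below loads the copy hosted on the Hugging Face hub; the dataset identifier, configuration names, and field names are assumptions about that mirror rather than part of the original release.

```python
from datasets import load_dataset

# Assumed hub identifier and fields; check the dataset card if they change.
arc = load_dataset("ai2_arc", "ARC-Challenge")    # also: "ARC-Easy"

print({split: len(arc[split]) for split in arc})  # train / validation / test sizes
ex = arc["test"][0]
print(ex["question"])
print(list(zip(ex["choices"]["label"], ex["choices"]["text"])), "->", ex["answerKey"])
```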
The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.", "For each question, the answer option with the highest overall confidence from Aristo's combination module is selected, scoring 1 point if the answer is correct, 0 otherwise. In the (very rare) case of N options having the same confidence (an N-way tie) that includes the correct option, the system receives 1/N points (equivalent to the asymptote of random guessing between the N)." ], [ "The results are summarized in Table TABREF33, showing the performance of the solvers individually, and their combination in the full Aristo system. Note that Aristo is a single system run on the five datasets (not retuned for each dataset in turn).", "Most notably, Aristo's scores on the Regents Exams far exceed earlier performances (e.g., BID0;BID25), and represents a new high-point on science questions.", "In addition, the results show the dramatic impact of new language modeling technology, embodied in AristoBERT and AristoRoBERTa, the scores for these two solvers dominating the performance of the overall system. Even on the ARC-Challenge questions, containing a wide variety of difficult questions, the language modeling based solvers dominate. The general increasing trend of solver scores from left to right in the table loosely reflects the progression of the NLP field over the six years of the project.", "To check that we have not overfit to our data, we also ran Aristo on the most recent years of the Regents Grade Exams (4th and 8th Grade), years 2017-19, that were unavailable at the start of the project and were not part of our datasets. The results are shown in Table TABREF42, a showing score similar to those on our larger datasets, suggesting the system is not overfit.", "On the entire exam, the NY State Education Department considers a score of 65% as “Meeting the Standards”, and over 85% as “Meeting the Standards with Distinction”. If this rubric applies equally to the NDMC subset we have studied, this would mean Aristo has met the standard with distinction in 8th Grade Science." ], [ "Several authors have observed that for some multiple choice datasets, systems can still perform well even when ignoring the question body and looking only at the answer options (BID44;BID45). This surprising result is particularly true for crowdsourced datasets, where workers may use stock words or phrases (e.g., “not”) in incorrect answer options that gives them away. A dataset with this characteristic is clearly problematic, as systems can spot such cues and do well without even reading the question.", "To measure this phenomenon on our datasets, we trained and tested a new AristoRoBERTa model giving it only the answer options (no question body nor retrieved knowledge). The results on the test partition are shown in Table TABREF44. We find scores significantly above random (25%), in particular for the 12th Grade set which has longer answers. 
But the scores are sufficiently low to indicate the datasets are relatively free of annotation artifacts that would allow the system to often guess the answer independent of the question. This desirable feature is likely due to the fact these are natural science questions, carefully crafted by experts for inclusion in exams, rather than mass-produced through crowdsourcing." ], [ "One way of testing robustness in multiple choice is to change or add incorrect answer options, and see if the system's performance degrades (BID26). If a system has mastery of the material, we would expect its score to be relatively unaffected by such modifications. To explore this, we investigated adversarially adding extra incorrect options, i.e., searching for answer options that might confuse the system, using AristoRoBERTa, and adding them as extra choices to the existing questions.", "To do this, for each question, we collect a large ($\\approx $ 100) number of candidate additional answer choices using the correct answers to other questions in the same dataset (and train/test split), where the top 100 are chosen by a superficial alignment score (features such as answer length and punctuation usage). We then re-rank these additional choices using AristoRoBERTa, take the top N, and add them to the original K (typically 4) choices for the question.", "If we add N=4 extra choices to the normal 4-way questions, they become 8-way multiple choice, and performance drops dramatically (over 40 percentage points), albeit unfairly as we have by definition added choices that confuse the system. We then train the model further on this 8-way adversarial dataset, a process known as inoculation (BID46). After further training, we still find a drop, but significantly less (around 10 percentage points absolute, 13.8% relative, Table TABREF45), even though many of the new distractor choices would be easy for a human to rule out.", "For example, while the solver gets the right answer to the following question:", "The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [selected, correct]", "it fails for the 8-way variant:", "The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [correct] (Q) joule (R) gradient [selected] (S) trench (T) add heat", "These results show that while Aristo performs well, it still has some blind spots that can be artificially uncovered through adversarial methods such as this." ], [ "This section describes related work on answering standardized-test questions, and on math word problems in particular. It provides an overview rather than exhaustive citations." ], [ "Standardized tests have long been proposed as challenge problems for AI (e.g., BID47;BID4;BID5;BID48), as they appear to require significant advances in AI technology while also being accessible, measurable, understandable, and motivating. Earlier work on standardized tests focused on specialized tasks, for example, SAT word analogies (BID49), GRE word antonyms (BID50), and TOEFL synonyms (BID51). More recently, there have been attempts at building systems to pass university entrance exams. Under NII's Todai project, several systems were developed for parts of the University of Tokyo Entrance Exam, including maths, physics, English, and history (BID52;BID53;BID54), although in some cases questions were modified or annotated before being given to the systems (e.g., BID55). 
Similarly, a smaller project worked on passing the Gaokao (China's college entrance exam) (e.g., BID56;BID57). The Todai project was reported as ended in 2016, in part because of the challenges of building a machine that could “grasp meaning in a broad spectrum” (BID58)." ], [ "Substantial progress has been achieved on math word problems. On plane geometry questions, (BID59) demonstrated an approach that achieve a 61% accuracy on SAT practice questions. The Euclid system (BID60) achieved a 43% recall and 91% precision on SAT \"closed-vocabulary\" algebra questions, a limited subset of questions that nonetheless constitutes approximately 45% of a typical math SAT exam. Closed-vocabulary questions are those that do not reference real-world situations (e.g., \"what is the largest prime smaller than 100?\" or \"Twice the product of x and y is 8. What is the square of x times y?\")", "Work on open-world math questions has continued, but results on standardized tests have not been reported and thus it is difficult to benchmark the progress relative to human performance. See Amini2019MathQATI for a recent snapshot of the state of the art, and references to the literature on this problem." ], [ "Answering science questions is a long-standing AI grand challenge (BID14;BID20). This paper reports on Aristo—the first system to achieve a score of over 90% on the non-diagram, multiple choice part of the New York Regents 8th Grade Science Exam, demonstrating that modern NLP methods can result in mastery of this task. Although Aristo only answers multiple choice questions without diagrams, and operates only in the domain of science, it nevertheless represents an important milestone towards systems that can read and understand. The momentum on this task has been remarkable, with accuracy moving from roughly 60% to over 90% in just three years. Finally, the use of independently authored questions from a standardized test allows us to benchmark AI performance relative to human students.", "Beyond the use of a broad vocabulary and scientific concepts, many of the benchmark questions intuitively appear to require reasoning to answer (e.g., Figure FIGREF19). To what extent is Aristo reasoning to answer questions? For many years in AI, reasoning was thought of as the discrete, symbolic manipulation of sentences expressed in a formally designed language (BID61;BID62). With the advent of deep learning, this notion of reasoning has shifted, with machines performing challenging tasks using neural architectures rather than explicit representation languages. Today, we do not have a sufficiently fine-grained notion of reasoning to answer this question precisely, but we can observe surprising performance on answering science questions. This suggests that the machine has indeed learned something about language and the world, and how to manipulate that knowledge, albeit neither symbolically nor discretely.", "Although an important milestone, this work is only a step on the long road toward a machine that has a deep understanding of science and achieves Paul Allen's original dream of a Digital Aristotle. 
A machine that has fully understood a textbook should not only be able to answer the multiple choice questions at the end of the chapter—it should also be able to generate both short and long answers to direct questions; it should be able to perform constructive tasks, e.g., designing an experiment for a particular hypothesis; it should be able to explain its answers in natural language and discuss them with a user; and it should be able to learn directly from an expert who can identify and correct the machine's misunderstandings. These are all ambitious tasks still largely beyond the current technology, but with the rapid progress happening in NLP and AI, solutions may arrive sooner than we expect." ], [ "We gratefully acknowledge the many other contributors to this work, including Niranjan Balasubramanian, Matt Gardner, Peter Jansen, Jayant Krishnamurthy, Souvik Kundu, Todor Mihaylov, Harsh Trivedi, Peter Turney, and the Beaker team at AI2." ] ], "section_name": [ "Introduction", "Introduction ::: The Turing Test versus Standardized Tests", "Introduction ::: Natural Language Processing", "Introduction ::: Machine Understanding of Textbooks", "A Brief History of Aristo", "The Aristo System", "The Aristo System ::: Overview", "The Aristo System ::: Information Retrieval and Statistics", "The Aristo System ::: Reasoning Methods", "The Aristo System ::: Large-Scale Language models", "The Aristo System ::: Ensembling", "Experiments and Results", "Experiments and Results ::: Experimental Methodology ::: Omitted Question Classes", "Experiments and Results ::: Experimental Methodology ::: Dataset Formulation", "Experiments and Results ::: Main Results", "Experiments and Results ::: Answer Only Performance", "Experiments and Results ::: Adversarial Answer Options", "Related Work", "Related Work ::: Standardized Tests", "Related Work ::: Math Word Problems", "Summary and Conclusion", "Summary and Conclusion ::: Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "5a770b359fa76d55d4cb3c238b4614e73e03f539" ], "answer": [ { "evidence": [ "The current configuration of Aristo comprises of eight solvers, described shortly, each of which attempts to answer a multiple choice question. To study particular phenomena and develop solvers, the project has created larger datasets to amplify and study different problems, resulting in 10 new datasets and 5 large knowledge resources for the community.", "The solvers can be loosely grouped into:", "Statistical and information retrieval methods", "Reasoning methods", "Large-scale language model methods", "Over the life of the project, the relative importance of the methods has shifted towards large-scale language methods.", "The field of NLP has advanced substantially with the advent of large-scale language models such as ELMo (BID6), ULMFit (BID37), GPT (BID38), BERT (BID7), and RoBERTa (BID8). These models are trained to perform various language prediction tasks such as predicting a missing word or the next sentence, using large amounts of text (e.g., BERT was trained on Wikipedia + the Google Book Corpus of 10,000 books). They can also be fine-tuned to new language prediction tasks, such as question-answering, and have been remarkably successful in the few months that they have been available.", "We apply BERT to multiple choice questions by treating the task as classification: Given a question $q$ with answer options $a_{i}$ and optional background knowledge $K_{i}$, we provide it to BERT as:", "[CLS] $K_i$ [SEP] $q$ [SEP] $a_{i}$ [SEP]", "The AristoBERT solver uses three methods to apply BERT more effectively. First, we retrieve and supply background knowledge along with the question when using BERT. This provides the potential for BERT to “read” that background knowledge and apply it to the question, although the exact nature of how it uses background knowledge is more complex and less interpretable. Second, we fine-tune BERT using a curriculum of several datasets, including some that are not science related. Finally, we ensemble different variants of BERT together." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The current configuration of Aristo comprises of eight solvers, described shortly, each of which attempts to answer a multiple choice question. To study particular phenomena and develop solvers, the project has created larger datasets to amplify and study different problems, resulting in 10 new datasets and 5 large knowledge resources for the community.\n\nThe solvers can be loosely grouped into:\n\nStatistical and information retrieval methods\n\nReasoning methods\n\nLarge-scale language model methods", "Over the life of the project, the relative importance of the methods has shifted towards large-scale language methods.", "The field of NLP has advanced substantially with the advent of large-scale language models such as ELMo (BID6), ULMFit (BID37), GPT (BID38), BERT (BID7), and RoBERTa (BID8). These models are trained to perform various language prediction tasks such as predicting a missing word or the next sentence, using large amounts of text (e.g., BERT was trained on Wikipedia + the Google Book Corpus of 10,000 books). 
They can also be fine-tuned to new language prediction tasks, such as question-answering, and have been remarkably successful in the few months that they have been available.", "We apply BERT to multiple choice questions by treating the task as classification: Given a question $q$ with answer options $a_{i}$ and optional background knowledge $K_{i}$, we provide it to BERT as:\n\n[CLS] $K_i$ [SEP] $q$ [SEP] $a_{i}$ [SEP]", "The AristoBERT solver uses three methods to apply BERT more effectively. First, we retrieve and supply background knowledge along with the question when using BERT. This provides the potential for BERT to “read” that background knowledge and apply it to the question, although the exact nature of how it uses background knowledge is more complex and less interpretable. Second, we fine-tune BERT using a curriculum of several datasets, including some that are not science related. Finally, we ensemble different variants of BERT together." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "593c2c8574319e88ea77d8c10f2af9096d12f88b" ], "answer": [ { "evidence": [ "Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \\times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25).", "The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.", "FLOAT SELECTED: Table 3: Dataset partition sizes (number of questions)." ], "extractive_spans": [], "free_form_answer": "Aristo Corpus\nRegents 4th\nRegents 8th\nRegents `12th\nARC-Easy\nARC-challenge ", "highlighted_evidence": [ "Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \\times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25).", "The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. 
All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.", "FLOAT SELECTED: Table 3: Dataset partition sizes (number of questions)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "Is Aristo just some modern NLP model (ex. BERT) finetuned od data specific for this task?", "On what dataset is Aristo system trained?" ], "question_id": [ "48cf360a7753a23342f53f116eeccc2014bcc8eb", "384d571e4017628ebb72f3debb2846efaf0cb0cb" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "landmark", "landmark" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 2: Aristo’s scores on Regents 8th Grade Science (non-diagram, multiple choice) over time (held-out test set).", "Figure 3: The Tuple Inference Solver retrieves tuples relevant to the question, and constructs a support graph for each answer option. Here, the support graph for the choice “(A) Moon” is shown. The tuple facts “...Moon reflect light...”, “...Moon is a ...satellite”, and “Moon orbits planets” all support this answer, addressing different parts of the question. This support graph is scored highest, hence option “(A) Moon” is chosen.", "Figure 4: Multee retrieves potentially relevant sentences, then for each answer option in turn, assesses the degree to which each sentence entails that answer. A multi-layered aggregator then combines this (weighted) evidence from each sentence. In this case, the strongest overall support is found for option “(C) table salt”, so it is selected.", "Table 1: Examples of linguistic and semantic gaps between knowledge Ki (left) and question Qi (right) that need to be bridged for answering qualitative questions.", "Figure 5: Given a question about a qualitative relationship (How does one increase/decrease affect another?), the qualitative reasoning solver retrieves a relevant qualitative rule from a large database. It then assesses which answer option is best implied by that rule. In this case, as the rule states more heat implies faster movement, option “(C)... move more rapidly” is scored highest and selected, including recognizing that “heat” and “melted”, and “faster” and “more rapidly” align.", "Figure 6: A sample of the wide variety of diagrams used in the Regents exams, including food chains, pictures, tables, graphs, circuits, maps, temporal processes, cross-sections, pie charts, and flow diagrams.", "Table 2: This table shows the results of each of the Aristo solvers, as well as the overall Aristo system, on each of the test sets. Most notably, Aristo achieves 91.6% accuracy in 8th Grade, and exceeds 83% in 12th Grade. (“Num Q” refers to the number of questions in each test set.). Note that Aristo is a single system, run unchanged on each dataset (not retuned for each dataset).", "Table 3: Dataset partition sizes (number of questions).", "Table 4: Aristo’s score on the three most recent years of Regents Science (2017-19), not part of the hidden benchmark.", "Table 5: Scores when looking at the answer options only for (retrained) AristoRoBERTa (no ensembling), compared with using the full questions. The (desirably) low scores/large drops indicate it is hard to guess the answer without reading the question.", "Table 6: Scores on the original 4-way multiple choice questions, and (after retraining) on adversarially generated 8-way multiple choice versions, for AristoRoBERTa (no ensembling)." ], "file": [ "3-Figure2-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "5-Table1-1.png", "5-Figure5-1.png", "6-Figure6-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png" ] }
[ "On what dataset is Aristo system trained?" ]
[ [ "1909.01958-The Aristo System ::: Overview-6", "1909.01958-7-Table3-1.png", "1909.01958-Experiments and Results ::: Experimental Methodology ::: Dataset Formulation-1" ] ]
[ "Aristo Corpus\nRegents 4th\nRegents 8th\nRegents `12th\nARC-Easy\nARC-challenge " ]
600