Introduction
Generative adversarial nets (GANs) (Goodfellow et al., 2014) are a class of trainable generative models that can generate artificial data examples similar to existing ones. In a GAN model, two sub-models are trained simultaneously: a generative model INLINEFORM0 from which artificial data examples can be sampled, and a discriminative model INLINEFORM1 which classifies real data examples against artificial ones from INLINEFORM2 . Training INLINEFORM3 to maximize its generation power while training INLINEFORM4 to minimize the generation power of INLINEFORM5 establishes a minimax problem whose ideal solution leaves no difference between the true and artificial examples. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, the Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al., 2016).
The GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long short-term memory (Hochreiter and Schmidhuber, 1997) generator and approximated discretization to generate text data. Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. The GAN model can also be applied to generate bag-of-words embeddings of text data, which focus on the key terms in a text document rather than the full original text; Glover (2016) provided such a model based on the energy-based GAN (Zhao et al., 2017).
To the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs (Liu and Tuzel, 2016; Mirza and Osindero, 2014) have been proposed, but a class in multi-class classification is not the same as a corpus. Knowing the underlying corpus membership of each text document provides additional information on how the documents are organized, and documents from the same corpus are expected to share similar topics or key words, so taking the membership information into account can benefit the training of a text model from a supervised perspective. We consider two problems associated with training on multi-corpus text data: (1) given a separate set of word embeddings from each corpus, such as word2vec embeddings (Mikolov et al., 2013), how can a better set of cross-corpus word embeddings be obtained from them? (2) how can the generation of document embeddings from different corpora be incorporated into a single GAN model?
For the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.
Section 2 reviews existing GAN models related to this paper. Section 3 describes the GAN models for training cross-corpus word embeddings and for generating document embeddings for each corpus, and explains the associated algorithms. Section 4 presents the results of the two models on text data sets and their transfer to supervised learning. Section 5 summarizes the results and concludes the paper.
Literature Review
In a GAN model, we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 , and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 . The binary classifier INLINEFORM4 outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem DISPLAYFORM0
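For reference, the standard GAN objective of Goodfellow et al. (2014), written here in generic notation with data distribution p_data and noise distribution p_z, is

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log (1 - D(G(z)))].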
With sufficient training, it is shown in Goodfellow et al. (2014) that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 , i.e. INLINEFORM2 .
Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) was proposed, which applies a 1-Lipschitz function as the discriminator. In a Wasserstein GAN, we consider the following minimax problem DISPLAYFORM0
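For reference, the Wasserstein GAN objective, with the discriminator D restricted to 1-Lipschitz functions and the same generic notation as above, is

\min_G \max_{\|D\|_L \le 1} \mathbb{E}_{x \sim p_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z \sim p_z}[D(G(z))].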
These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones. We note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. We summarize such GANs because a corpus can be potentially treated as a class. The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task.
For instance, the CoGAN (Liu and Tuzel, 2016) considers pairs of data examples from different categories as follows INLINEFORM0
where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 (i.e. close to INLINEFORM2 ) are tied. Mirza and Osindero (2014) proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 . While these GANs generate samples resembling different classes, other variations of GANs apply the class labels for semi-supervised learning. For instance, Salimans et al. (2016) proposed the following objective DISPLAYFORM0
where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 -th artificial class. Similar models can be found in Odena (2016), the CatGAN in Springenberg (2016), and the LSGAN in Mao et al. (2017). However, all these models consider only images and do not produce word or document embeddings, and are therefore different from our models.
For generating real text, Zhang et al. (2016) proposed textGAN in which the generator has the following form, DISPLAYFORM0
where INLINEFORM0 is the noise vector, INLINEFORM1 is the generated sentence, INLINEFORM2 are the words, and INLINEFORM3 . A one-dimensional convolutional neural network (Collobert et al., 2011; Kim, 2014) is applied as the discriminator, and a weighted softmax function is applied to make the argmax operation differentiable. With textGAN, sentences such as “we show the efficacy of our new solvers, making it up to identify the optimal random vector...” can be generated. Similar models can also be found in Wang et al. (2016), Press et al. (2017), and Rajeswar et al. (2017). The focus of our work is to summarize information from longer documents, so we apply document embeddings such as tf-idf to represent the documents rather than generating real text.
For generating bag-of-words embeddings of text, Glover (2016) proposed the following model DISPLAYFORM0
where INLINEFORM0 is the mean squared error of a de-noising autoencoder, and INLINEFORM1 is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness across several corpora, while Glover (2016) assumed a single corpus.
For extracting word embeddings from text data, Mikolov et al. (2013) proposed the word2vec model, which has two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013b), where the neighboring words are used to predict the appearance of each word, and the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2014), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as its embedding. For obtaining document embeddings, the para2vec model (Le and Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach, which takes the average of the embeddings of the words in a document as the document embedding, is exhibited in Socher et al. (2013).
Models and Algorithms
Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”
weGAN: Training cross-corpus word embeddings
We assume that for each corpus INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 from the same documents represented by the new embeddings INLINEFORM11 .
Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 , we define its document embedding with INLINEFORM3 as follows, DISPLAYFORM0
where INLINEFORM0 can be any mapping. Similarly, we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows, with INLINEFORM3 trainable DISPLAYFORM0
In a typical example, word embeddings would be based on word2vec or GloVe. The function INLINEFORM0 can be based on tf-idf, i.e. INLINEFORM1 where INLINEFORM2 is the word embedding of the INLINEFORM3 -th word in the INLINEFORM4 -th corpus INLINEFORM5 and INLINEFORM6 is the tf-idf representation of the INLINEFORM7 -th document INLINEFORM8 in the INLINEFORM9 -th corpus INLINEFORM10 .
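As a small illustration of this construction, the following sketch computes a tf-idf-weighted document embedding from a matrix of word embeddings; the function and variable names, and the normalization by the total tf-idf weight, are our own choices and not taken from the paper.

    import numpy as np

    def document_embedding(tfidf_vec, word_embeddings):
        # tfidf_vec: array of shape (V,), tf-idf weights over the dictionary.
        # word_embeddings: array of shape (V, d), one row per dictionary word.
        # Returns the tf-idf-weighted average word vector, shape (d,).
        # Normalizing by the total weight keeps the scale comparable across
        # documents of different lengths (our choice, not the paper's).
        weights = tfidf_vec / (tfidf_vec.sum() + 1e-12)
        return weights @ word_embeddings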
To train the GAN model, we consider the following minimax problem DISPLAYFORM0
where INLINEFORM0 is a discriminator of whether a document is original or artificial. Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 , and INLINEFORM4 is a unit vector with only the INLINEFORM5 -th component being one and all other components being zeros. Note that INLINEFORM6 is equivalent to INLINEFORM7 , but we use the former notation due to its brevity.
The intuition of problem (8) is explained as follows. First we consider a discriminator INLINEFORM0 which is a feedforward neural network (FFNN) with binary outcomes, and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 . Discriminator INLINEFORM3 minimizes this classification error, i.e. it maximizes the log-likelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1. This corresponds to DISPLAYFORM0
For the generator INLINEFORM0 , we wish to minimize (8) against INLINEFORM1 so that we can apply the minimax strategy, and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 . Meanwhile, we also consider a classifier INLINEFORM4 with INLINEFORM5 outcomes, and associate INLINEFORM6 with label INLINEFORM7 , so that the generator INLINEFORM8 can learn from the document labeling in a semi-supervised way.
If the classifier INLINEFORM0 outputs a INLINEFORM1 -dimensional softmax probability vector, we minimize the following against INLINEFORM2 , which corresponds to (8) given INLINEFORM3 and INLINEFORM4 : DISPLAYFORM0
For the classifier INLINEFORM0 , we also minimize its negative log-likelihood DISPLAYFORM0
Assembling (9-11) together, we retrieve the original minimax problem (8).
We train the discriminator and the classifier, INLINEFORM0 , and the combined embeddings INLINEFORM1 according to (9-11) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.
The algorithm for weGAN is summarized in Algorithm 1, and Figure 1 illustrates the weGAN model.
Algorithm 1. Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 .
1. Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 .
2. Until the maximum number of iterations is reached:
(a) Update INLINEFORM5 and INLINEFORM6 according to (9) and (11) given a mini-batch INLINEFORM7 of training examples INLINEFORM8 .
(b) Update INLINEFORM9 according to (10) given a mini-batch INLINEFORM10 of training examples INLINEFORM11 .
3. Output INLINEFORM12 as the cross-corpus word embeddings.
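A minimal sketch of these alternating updates is given below, assuming a PyTorch implementation in which the log-likelihood terms in (9)-(11) are written as binary and multi-class cross-entropies, and using the common non-saturating heuristic for the embedding update; the names D, C, W_new and the mini-batch format are our own placeholders, not the paper's code.

    import torch
    import torch.nn.functional as F

    def train_wegan(D, C, W_new, batches, epochs=100, lr=1e-3):
        # D: binary discriminator (one logit per document); C: corpus classifier
        # (one logit per corpus); W_new: trainable cross-corpus word-embedding
        # matrix of shape (V, d) with requires_grad=True; batches yields
        # (tfidf, doc_orig, label), where doc_orig is the document embedding
        # under the per-corpus embeddings.
        opt_dc = torch.optim.SGD(list(D.parameters()) + list(C.parameters()), lr=lr)
        opt_w = torch.optim.SGD([W_new], lr=lr)
        for _ in range(epochs):
            for tfidf, doc_orig, label in batches:
                doc_new = tfidf @ W_new  # documents re-represented with W_new
                # Discriminator/classifier step, cf. (9) and (11): D labels the
                # original documents 1 and the re-embedded documents 0, while C
                # predicts the corpus label of the re-embedded documents.
                ones = torch.ones_like(D(doc_orig))
                d_loss = (F.binary_cross_entropy_with_logits(D(doc_orig), ones)
                          + F.binary_cross_entropy_with_logits(D(doc_new.detach()),
                                                               torch.zeros_like(ones))
                          + F.cross_entropy(C(doc_new.detach()), label))
                opt_dc.zero_grad(); d_loss.backward(); opt_dc.step()
                # Embedding step, cf. (10): update W_new so that the re-embedded
                # documents look original to D and are classified correctly by C.
                doc_new = tfidf @ W_new
                w_loss = (F.binary_cross_entropy_with_logits(D(doc_new),
                                                             torch.ones_like(D(doc_new)))
                          + F.cross_entropy(C(doc_new), label))
                opt_w.zero_grad(); w_loss.backward(); opt_w.step()
        return W_new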
deGAN: Generating document embeddings for multi-corpus text data
In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 , INLINEFORM1 . We construct INLINEFORM2 generators, INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 . As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let INLINEFORM6 . We initialize a noise vector INLINEFORM7 , where INLINEFORM8 , and INLINEFORM9 is any noise distribution.
For a generator INLINEFORM0 represented by its parameters, we first map the noise vector INLINEFORM1 to the hidden layer, which represents different topics. We consider two hidden vectors, INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus, DISPLAYFORM0
Here INLINEFORM0 represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function INLINEFORM1 , DISPLAYFORM0
To summarize, we may represent the process from noise to the document embedding as follows, DISPLAYFORM0
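One generator of this form can be sketched as follows; the hidden sizes, the tanh hidden activation and the softmax output are illustrative assumptions (the paper's concrete choices appear in Section 4 but are not reproduced here), and the bias terms are omitted following the remark above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeGANGenerator(nn.Module):
        # One deGAN generator: noise -> general-topic and corpus-specific hidden
        # vectors -> generated document embedding.
        def __init__(self, noise_dim, hidden_dim, doc_dim):
            super().__init__()
            self.general = nn.Linear(noise_dim, hidden_dim, bias=False)   # general topics
            self.specific = nn.Linear(noise_dim, hidden_dim, bias=False)  # per-corpus topics
            self.out = nn.Linear(2 * hidden_dim, doc_dim, bias=False)

        def forward(self, z):
            h_general = torch.tanh(self.general(z))
            h_specific = torch.tanh(self.specific(z))
            h = torch.cat([h_general, h_specific], dim=-1)
            # A softmax output keeps the generated vector non-negative and summing
            # to one, roughly comparable to a normalized tf-idf vector (assumption).
            return F.softmax(self.out(h), dim=-1)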
Given the generated document embeddings INLINEFORM0 , we consider the following minimax problem to train the generator INLINEFORM1 and the discriminator INLINEFORM2 : INLINEFORM3 INLINEFORM4
Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 . Note that when INLINEFORM3 , the discriminator part of our model is equivalent to the original GAN model.
To explain (15), first we consider the discriminator INLINEFORM0 . Because there are multiple corpora of text documents, here we consider INLINEFORM1 categories as output of INLINEFORM2 , from which categories INLINEFORM3 represent the original corpora INLINEFORM4 , and categories INLINEFORM5 represent the generated document embeddings (e.g. bag-of-words) from INLINEFORM6 . Assume the discriminator INLINEFORM7 , a feedforward neural network, outputs the distribution of a text document being in each category. We maximize the log-likelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0
Such a classifier not only classifies text documents into different categories, but also considers INLINEFORM0 “fake” categories from the generators. When training the generators INLINEFORM1 , we minimize the following, which compares the INLINEFORM2 -th and INLINEFORM3 -th categories DISPLAYFORM0
The intuition of (17) is that for each generated document embedding INLINEFORM0 , we need to decrease INLINEFORM1 , which is the probability of the generated embedding being correctly classified, and increase INLINEFORM2 , which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 . The ratio in (17) reflects these two properties.
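Since (17) is not written out here, one reading consistent with the description above is the following: assuming there are N corpora, so that the discriminator has 2N output categories, and writing D_j(x) for the probability the discriminator assigns x to category j, the i-th generator G_i minimizes

\mathbb{E}_{z \sim p_z}\!\left[\log \frac{D_{N+i}(G_i(z))}{D_i(G_i(z))}\right],

so that D_{N+i}(G_i(z)), the probability of the generated embedding being assigned to its “fake” category, is decreased, and D_i(G_i(z)), the probability of it being assigned to the target corpus i, is increased. This is a sketch of one consistent form, not necessarily the paper's exact formula.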
We iteratively train (16) and (17) until the classification error of INLINEFORM0 becomes stable. The algorithm for deGAN is summarized in Algorithm 2, and Figure 2 illustrates the deGAN model.
Algorithm 2.
1. Randomly initialize the weights of INLINEFORM0 .
2. Initialize the discriminator INLINEFORM1 with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and the other parameters randomly initialized.
3. Until the maximum number of iterations is reached:
(a) Update INLINEFORM2 according to (16) given a mini-batch of training examples INLINEFORM3 and samples from noise INLINEFORM4 .
(b) Update INLINEFORM5 according to (17) given a mini-batch of training examples INLINEFORM6 and samples from noise INLINEFORM7 .
4. Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier.
We next show that from (15), the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 , which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario.
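For comparison, in the binary case of Goodfellow et al. (2014), the optimal discriminator for a fixed generator is

D^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},

and the global optimum of the minimax problem is attained exactly when p_g = p_{\mathrm{data}}; Proposition 1 is the analogous statement for the multi-corpus discriminator with twice as many output categories as corpora.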
Proposition 1. Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 ; INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous; and that INLINEFORM6 are solutions to (15). Then INLINEFORM7 , the probability density of the document embeddings from INLINEFORM8 , INLINEFORM9 , are equal to INLINEFORM10 .
Proof. Since INLINEFORM0 is bounded, all of the integrals exhibited next are well-defined and finite. Since INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are continuous, it follows that for any parameters, INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support.
From the first line of (15), INLINEFORM0
This problem reduces to INLINEFORM0 subject to INLINEFORM1 , the solution of which is INLINEFORM2 , INLINEFORM3 . Therefore, the solution to (18) is DISPLAYFORM0
We then obtain from the second line of (15) that INLINEFORM0
From non-negativity of the Kullback-Leibler divergence, we conclude that INLINEFORM0
Experiments
In the experiments, we consider four data sets, two newly created and two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpus to obtain word embeddings of size 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set.
The document embedding in weGAN is the tf-idf weighted word embedding transformed by the INLINEFORM0 activation, i.e. DISPLAYFORM0
For deGAN, we use INLINEFORM0 -normalized tf-idf as the document embedding because it is easier to interpret than the transformed embedding in (20).
For weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained on all documents. For training our models, we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes, and the discriminator INLINEFORM3 , also with a single hidden layer, has 10 hidden nodes. All these hyperparameters have been tuned. For the labels INLINEFORM4 in (8), we apply the corpus membership of each document.
For the noise distribution INLINEFORM0 for deGAN, we apply the uniform distribution INLINEFORM1 . In (14) for deGAN, INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 -normalized tf-idf vectors for each document. For the discriminator INLINEFORM5 of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator INLINEFORM6 , we apply a learning rate of INLINEFORM7 , and for the generator INLINEFORM8 , we apply a learning rate of INLINEFORM9 , because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.
For weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the same FFNN. Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch.
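Metric (1) can be sketched as follows with scikit-learn's KMeans and rand_score; the paper's own implementation may differ.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import rand_score

    def clustering_rand_index(doc_embeddings, corpus_labels, n_corpora, seed=0):
        # Cluster the document embeddings into n_corpora clusters and compare the
        # clustering against the true corpus membership with the Rand index.
        km = KMeans(n_clusters=n_corpora, random_state=seed, n_init=10)
        predicted = km.fit_predict(doc_embeddings)
        return rand_score(corpus_labels, predicted)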
The CNN data set
In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
We hypothesize that because weGAN takes document labels into account in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from Table 1 that weGAN tends to improve both metrics.
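Welch's t-test over the five randomized runs can be carried out as below; the per-run numbers are placeholders for illustration only, not the paper's results.

    from scipy import stats

    # Per-run metric values for the baseline and for weGAN (placeholder values).
    baseline_runs = [0.842, 0.839, 0.845, 0.841, 0.843]
    wegan_runs = [0.851, 0.848, 0.853, 0.850, 0.852]

    # equal_var=False gives Welch's t-test, which does not assume equal variances.
    t_stat, p_value = stats.ttest_ind(wegan_runs, baseline_runs, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")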
Meanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored by the synonyms of each word measured by the cosine similarity. On average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms of three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.
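The synonym comparison can be sketched as follows, computing each word's top-10 neighbors by cosine similarity before and after training; the function names are our own.

    import numpy as np

    def top_k_synonyms(word_idx, embeddings, k=10):
        # Indices of the k nearest words to word_idx by cosine similarity.
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        sims = normed @ normed[word_idx]
        sims[word_idx] = -np.inf  # exclude the word itself
        return np.argsort(-sims)[:k]

    def synonym_change(word_idx, emb_before, emb_after, k=10):
        # Number of words entering the top-k list after training.
        before = set(top_k_synonyms(word_idx, emb_before, k))
        after = set(top_k_synonyms(word_idx, emb_after, k))
        return len(after - before)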
We observe from Table 2 that for “Obama,” “Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before, but is after, which shows that the synonyms after training are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.
We next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the FFNN initialized with word2vec. The change is also statistically significant at the INLINEFORM0 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.
To compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words sorted by frequency in descending order where the stop words are removed. The bag-of-words embeddings are shown in Table 4.
From Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more named entities, while those of the artificial deGAN documents tend to be more general. There are also many examples, not shown here, in which the artificial bag-of-words embeddings contain many named entities such as “Turkey,” “ISIS,” etc., e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”
We also perform dimensionality reduction using t-SNE (van der Maaten and Hinton, 2008), and plot 100 random samples from each original or artificial category in Figure 3, with the original samples shown in red and the generated ones shown in blue. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.”
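The t-SNE visualization can be reproduced along these lines with scikit-learn and matplotlib; the sampling and plotting details beyond what is described above are our own choices.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    def plot_tsne(original_emb, generated_emb, n_per_group=100, seed=0):
        # 2-D t-SNE of random samples of original (red) and generated (blue) embeddings.
        rng = np.random.default_rng(seed)
        orig = original_emb[rng.choice(len(original_emb), n_per_group, replace=False)]
        gen = generated_emb[rng.choice(len(generated_emb), n_per_group, replace=False)]
        points = TSNE(n_components=2, random_state=seed).fit_transform(np.vstack([orig, gen]))
        plt.scatter(points[:n_per_group, 0], points[:n_per_group, 1], c="red", label="original")
        plt.scatter(points[n_per_group:, 0], points[n_per_group:, 1], c="blue", label="generated")
        plt.legend()
        plt.show()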
We observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval).
The TIME data set
In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
Table 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. The differences are also significant at the INLINEFORM0 level.
From Table 5, we observe that both GAN models yield improved performance in supervised learning. For weGAN, on average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6.
In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.
For deGAN, we also compare the original and artificial samples in terms of the highest probability words. Table 7 shows one record for each category.
From Table 7, we observe that the produced bag-of-words are generally alike, and the words in the same sample are related to each other to some extent.
We also perform dimensionality reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. We observe that the original and generated points are mixed together, but deGAN cannot reproduce the outliers.
The 20 Newsgroups data set
The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpus. We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents. We train weGAN and deGAN as described at the beginning of Section 4, except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM4 level. The other results are similar to the previous two data sets and are therefore omitted here.
The Reuters-21578 data set
The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into INLINEFORM0 training documents, from which 692 validation documents are held out, and INLINEFORM1 testing documents. We train weGAN and deGAN in the same way as for the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM2 level except the Rand index. The other results are similar to the CNN and TIME data sets and are therefore omitted here.
Conclusion
In this paper, we have demonstrated the application of the GAN model to text data with multiple corpora. We have shown that the GAN model is not only able to generate images, but is also able to refine word embeddings and generate document embeddings. Such models can better learn the inner structure of multi-corpus text data, and also benefit supervised learning. The improvements in supervised learning are not large but are statistically significant. The weGAN model outperforms deGAN in terms of supervised learning on 3 out of 4 data sets, and is therefore recommended. The synonyms from weGAN also tend to be more relevant than those of the original word2vec model. The t-SNE plots show that the generated document embeddings are distributed similarly to the original ones.
References
M. Arjovsky, S. Chintala, and L. Bottou. (2017). Wasserstein GAN. arXiv:1701.07875.
D. Blei, A. Ng, and M. Jordan. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022.
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research. 12:2493-2537.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014).
J. Glover. (2016). Modeling documents with Generative Adversarial Networks. In Workshop on Adversarial Training (NIPS 2016).
S. Hochreiter and J. Schmidhuber. (1997). Long Short-term Memory. Neural Computation, 9:1735-1780.
Y. Kim. (2014). Convolutional Neural Networks for Sentence Classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Q. Le and T. Mikolov. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).
J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. (2017). Adversarial Learning for Neural Dialogue Generation. arXiv:1701.06547.
M.-Y. Liu, and O. Tuzel. (2016). Coupled Generative Adversarial Networks. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley. (2017). Least Squares Generative Adversarial Networks. arXiv:1611.04076.
T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. (2013). Distributed Representations of Words and Phrases and Their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013).
T. Mikolov, K. Chen, G. Corrado, and J. Dean. (2013b). Efficient Estimation of Word Representations in Vector Space. In Workshop (ICLR 2013).
M. Mirza, S. Osindero. (2014). Conditional Generative Adversarial Nets. arXiv:1411.1784.
A. Odena. (2016). Semi-supervised Learning with Generative Adversarial Networks. arXiv:1606.01583.
J. Pennington, R. Socher, and C. Manning. (2014). GloVe: Global Vectors for Word Representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. (2017). Language Generation with Recurrent Generative Adversarial Networks without Pre-training. In 1st Workshop on Subword and Character level models in NLP (EMNLP 2017).
S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. (2017). Adversarial Generation of Natural Language. arXiv:1705.10929.
W. Rand. (1971). Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66:846-850.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. (2013). Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).
J. Springenberg. (2016). Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In 4th International Conference on Learning Representations (ICLR 2016).
L. van der Maaten, and G. Hinton. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
B. Wang, K. Liu, and J. Zhao. (2017). Conditional Generative Adversarial Networks for Commonsense Machine Comprehension. In Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).
Y. Zhang, Z. Gan, and L. Carin. (2016). Generating Text via Adversarial Training. In Workshop on Adversarial Training (NIPS 2016).
J. Zhao, M. Mathieu, and Y. LeCun. (2017). Energy-based Generative Adversarial Networks. In 5th International Conference on Learning Representations (ICLR 2017).
6548db45fc28e8a8b51f114635bad14a13eaec5b | 6548db45fc28e8a8b51f114635bad14a13eaec5b_1 | Q: Which GAN do they use?
Text: Introduction
Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones. In a GAN model, there are two sub-models simultaneously trained: a generative model INLINEFORM0 from which artificial data examples can be sampled, and a discriminative model INLINEFORM1 which classifies real data examples and artificial ones from INLINEFORM2 . By training INLINEFORM3 to maximize its generation power, and training INLINEFORM4 to minimize the generation power of INLINEFORM5 , so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al. 2016).
The GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long-short term memory (Hochreiter and Schmidhuber, 1997) generator and approximated discretization to generate text data. Moreover, Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. Meanwhile, the GAN model can also be applied to generate bag-of-words embeddings of text data, which focus more on key terms in a text document rather than the original document itself. Glover (2016) provided such a model with the energy-based GAN (Zhao et al., 2017).
To the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs (Liu and Tuzel, 2016; Mirza and Osindero, 2014) have been proposed, but a class in multi-class classification is not the same as multiple corpora. Because knowing the underlying corpus membership of each text document can provide better information on how the text documents are organized, and documents from the same corpus are expected to share similar topics or key words, considering the membership information can benefit the training of a text model from a supervised perspective. We consider two problems associated with training multi-corpus text data: (1) Given a separate set of word embeddings from each corpus, such as the word2vec embeddings (Mikolov et al., 2013), how to obtain a better set of cross-corpus word embeddings from them? (2) How to incorporate the generation of document embeddings from different corpora in a single GAN model?
For the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.
Section 2 reviews existing GAN models related to this paper. Section 3 describes the GAN models on training cross-corpus word embeddings and generating document embeddings for each corpora, and explains the associated algorithms. Section 4 presents the results of the two models on text data sets, and transfers them to supervised learning. Section 5 summarizes the results and concludes the paper.
Literature Review
In a GAN model, we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 , and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 . The binary classifier INLINEFORM4 outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem DISPLAYFORM0
With sufficient training, it is shown in Goodfellow et al. (2014) that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 , i.e. INLINEFORM2 .
Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) is proposed which applies a 1-Lipschitz function as a discriminator. In a Wasserstein GAN, we consider the following minimax problem DISPLAYFORM0
These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones. We note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. We summarize such GANs because a corpus can be potentially treated as a class. The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task.
For instance, the CoGAN (Liu and Tuzel, 2016) considers pairs of data examples from different categories as follows INLINEFORM0
where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 (i.e. close to INLINEFORM2 ) are tied. Mirza and Osindero (2014) proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 . While these GANs generate samples resembling different classes, other variations of GANs apply the class labels for semi-supervised learning. For instance, Salimans et al. (2016) proposed the following objective DISPLAYFORM0
where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 -th artificial class. Similar models can be found in Odena (2016), the CatGAN in Springenberg (2016), and the LSGAN in Mao et al. (2017). However, all these models consider only images and do not produce word or document embeddings, therefore being different from our models.
For generating real text, Zhang et al. (2016) proposed textGAN in which the generator has the following form, DISPLAYFORM0
where INLINEFORM0 is the noise vector, INLINEFORM1 is the generated sentence, INLINEFORM2 are the words, and INLINEFORM3 . A uni-dimensional convolutional neural network (Collobert et al, 2011; Kim, 2014) is applied as the discriminator. Also, a weighted softmax function is applied to make the argmax function differentiable. With textGAN, sentences such as “we show the efficacy of our new solvers, making it up to identify the optimal random vector...” can be generated. Similar models can also be found in Wang et al. (2016), Press et al. (2017), and Rajeswar et al. (2017). The focus of our work is to summarize information from longer documents, so we apply document embeddings such as the tf-idf to represent the documents rather than to generate real text.
For generating bag-of-words embeddings of text, Glover (2016) proposed the following model DISPLAYFORM0
and INLINEFORM0 is the mean squared error of a de-noising autoencoder, and INLINEFORM1 is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness based on several corpora, while Glover (2016) assumed a single corpus.
For extracting word embeddings given text data, Mikolov et al. (2013) proposed the word2vec model, for which there are two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013b), where the neighboring words are used to predict the appearance of each word; the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2013), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as the embedding. For obtaining document embeddings, the para2vec model (Le and Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach by taking the average of the embeddings of each word in a document and output the document embedding is exhibited in Socher et al. (2013).
Models and Algorithms
Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”
weGAN: Training cross-corpus word embeddings
We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 .
Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 , we define its document embedding with INLINEFORM3 as follows, DISPLAYFORM0
where INLINEFORM0 can be any mapping. Similarly, we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows, with INLINEFORM3 trainable DISPLAYFORM0
In a typical example, word embeddings would be based on word2vec or GLoVe. Function INLINEFORM0 can be based on tf-idf, i.e. INLINEFORM1 where INLINEFORM2 is the word embedding of the INLINEFORM3 -th word in the INLINEFORM4 -th corpus INLINEFORM5 and INLINEFORM6 is the tf-idf representation of the INLINEFORM7 -th document INLINEFORM8 in the INLINEFORM9 -th corpus INLINEFORM10 .
To train the GAN model, we consider the following minimax problem DISPLAYFORM0
where INLINEFORM0 is a discriminator of whether a document is original or artificial. Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 , and INLINEFORM4 is a unit vector with only the INLINEFORM5 -th component being one and all other components being zeros. Note that INLINEFORM6 is equivalent to INLINEFORM7 , but we use the former notation due to its brevity.
The intuition of problem (8) is explained as follows. First we consider a discriminator INLINEFORM0 which is a feedforward neural network (FFNN) with binary outcomes, and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 . Discriminator INLINEFORM3 minimizes this classification error, i.e. it maximizes the log-likelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1. This corresponds to DISPLAYFORM0
For the generator INLINEFORM0 , we wish to minimize (8) against INLINEFORM1 so that we can apply the minimax strategy, and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 . Meanwhile, we also consider classifier INLINEFORM4 with INLINEFORM5 outcomes, and associates INLINEFORM6 with label INLINEFORM7 , so that the generator INLINEFORM8 can learn from the document labeling in a semi-supervised way.
If the classifier INLINEFORM0 outputs a INLINEFORM1 -dimensional softmax probability vector, we minimize the following against INLINEFORM2 , which corresponds to (8) given INLINEFORM3 and INLINEFORM4 : DISPLAYFORM0
For the classifier INLINEFORM0 , we also minimize its negative log-likelihood DISPLAYFORM0
Assembling (9-11) together, we retrieve the original minimax problem (8).
We train the discriminator and the classifier, INLINEFORM0 , and the combined embeddings INLINEFORM1 according to (9-11) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.
The algorithm for weGAN is summarized in Algorithm 1, and Figure 1 illustrates the weGAN model.
Algorithm 1. Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 . Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 . Until maximum number of iterations reached Update INLINEFORM5 and INLINEFORM6 according to (9) and (11) given a mini-batch INLINEFORM7 of training examples INLINEFORM8 . Update INLINEFORM9 according to (10) given a mini-batch INLINEFORM10 of training examples INLINEFORM11 . Output INLINEFORM12 as the cross-corpus word embeddings.
deGAN: Generating document embeddings for multi-corpus text data
In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 , INLINEFORM1 . We construct INLINEFORM2 generators, INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 . As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let INLINEFORM6 . We initialize a noise vector INLINEFORM7 , where INLINEFORM8 , and INLINEFORM9 is any noise distribution.
For a generator INLINEFORM0 represented by its parameters, we first map the noise vector INLINEFORM1 to the hidden layer, which represents different topics. We consider two hidden vectors, INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus, DISPLAYFORM0
Here INLINEFORM0 represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function INLINEFORM1 , DISPLAYFORM0
To summarize, we may represent the process from noise to the document embedding as follows, DISPLAYFORM0
Given the generated document embeddings INLINEFORM0 , we consider the following minimax problem to train the generator INLINEFORM1 and the discriminator INLINEFORM2 : INLINEFORM3 INLINEFORM4
Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 . Note that when INLINEFORM3 , the discriminator part of our model is equivalent to the original GAN model.
To explain (15), first we consider the discriminator INLINEFORM0 . Because there are multiple corpora of text documents, here we consider INLINEFORM1 categories as output of INLINEFORM2 , from which categories INLINEFORM3 represent the original corpora INLINEFORM4 , and categories INLINEFORM5 represent the generated document embeddings (e.g. bag-of-words) from INLINEFORM6 . Assume the discriminator INLINEFORM7 , a feedforward neural network, outputs the distribution of a text document being in each category. We maximize the log-likelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0
Such a classifier does not only classifies text documents into different categories, but also considers INLINEFORM0 “fake” categories from the generators. When training the generators INLINEFORM1 , we minimize the following which makes a comparison between the INLINEFORM2 -th and INLINEFORM3 -th categories DISPLAYFORM0
The intuition of (17) is that for each generated document embedding INLINEFORM0 , we need to decrease INLINEFORM1 , which is the probability of the generated embedding being correctly classified, and increase INLINEFORM2 , which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 . The ratio in (17) reflects these two properties.
We iteratively train (16) and (17) until the classification error of INLINEFORM0 becomes stable. The algorithm for deGAN is summarized in Algorithm 2, and Figure 2 illustrates the deGAN model..
Algorithm 2. Randomly initialize the weights of INLINEFORM0 . Initialize the discriminator INLINEFORM1 with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and other parameters randomly initialized. Until maximum number of iterations reached Update INLINEFORM2 according to (16) given a mini-batch of training examples INLINEFORM3 and samples from noise INLINEFORM4 . Update INLINEFORM5 according to (17) given a mini-batch of training examples INLINEFORM6 and samples form noise INLINEFORM7 . Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier.
We next show that from (15), the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 , which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario.
Proposition 1. Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 ; INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous; and that INLINEFORM6 are solutions to (15). Then INLINEFORM7 , the probability density of the document embeddings from INLINEFORM8 , INLINEFORM9 , are equal to INLINEFORM10 .
Proof. Since INLINEFORM0 is bounded, all of the integrals exhibited next are well-defined and finite. Since INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are continuous, it follows that for any parameters, INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support.
From the first line of (15), INLINEFORM0
This problem reduces to INLINEFORM0 subject to INLINEFORM1 , the solution of which is INLINEFORM2 , INLINEFORM3 . Therefore, the solution to (18) is DISPLAYFORM0
We then obtain from the second line of (15) that INLINEFORM0
From non-negativity of the Kullback-Leibler divergence, we conclude that INLINEFORM0
Experiments
In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set.
The document embedding in weGAN is the tf-idf weighted word embedding transformed by the INLINEFORM0 activation, i.e. DISPLAYFORM0
For deGAN, we use INLINEFORM0 -normalized tf-idf as the document embedding because it is easier to interpret than the transformed embedding in (20).
For weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained from all documents. For training our models, we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes, and the discriminator with a single hidden layer INLINEFORM3 has 10 hidden nodes. All these parameters have been optimized. For the labels INLINEFORM4 in (8), we apply corpus membership of each document.
For the noise distribution INLINEFORM0 for deGAN, we apply the uniform distribution INLINEFORM1 . In (14) for deGAN, INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 -normalized tf-idf vectors for each document. For the discriminator INLINEFORM5 of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator INLINEFORM6 , we apply a learning rate of INLINEFORM7 , and for the generator INLINEFORM8 , we apply a learning rate of INLINEFORM9 , because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.
For weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the same FFNN. Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch.
The CNN data set
In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from Table 1 that weGAN tends to improve both metrics.
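The significance test amounts to a two-sample Welch's t-test over the five runs, as in the sketch below; the listed numbers are placeholders, not the reported results.

```python
# Welch's t-test over 5 randomized runs; the values below are placeholders.
from scipy.stats import ttest_ind

w2v_runs   = [0.842, 0.838, 0.845, 0.840, 0.843]   # metric per run, word2vec baseline
wegan_runs = [0.851, 0.848, 0.853, 0.850, 0.849]   # metric per run, weGAN

t_stat, p_value = ttest_ind(wegan_runs, w2v_runs, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```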
Meanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored by the synonyms of each word measured by the cosine similarity. On average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms of three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.
We observe from Table 2 that for “Obama,” “Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before training but is afterwards, which shows that the synonyms after training are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.
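The synonym comparison can be reproduced with the cosine-similarity queries sketched below, assuming `wv_before` and `wv_after` are gensim KeyedVectors holding the word2vec and weGAN embeddings, respectively.

```python
# Top-10 synonyms by cosine similarity, before and after weGAN training (a sketch;
# wv_before and wv_after are assumed gensim KeyedVectors objects).
def top_synonyms(wv, word, k=10):
    return [w for w, _ in wv.most_similar(word, topn=k)]

def fraction_changed(wv_before, wv_after, vocab, k=10):
    # fraction of words whose top-k synonym set changes after training
    changed = [w for w in vocab
               if set(top_synonyms(wv_before, w, k)) != set(top_synonyms(wv_after, w, k))]
    return len(changed) / len(vocab)
```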
We next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the FFNN initialized with word2vec. The change is also statistically significant at the INLINEFORM0 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.
To compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words sorted by frequency in descending order where the stop words are removed. The bag-of-words embeddings are shown in Table 4.
From Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more named entities, while those of the artificial deGAN documents tend to be more general. There are many additional examples, not shown here, in which the artificial bag-of-words embeddings do contain named entities such as “Turkey,” “ISIS,” etc., e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”
We also perform dimensional reduction using t-SNE (van der Maaten and Hinton, 2008), and plot 100 random samples from each original or artificial category in Figure 3, with the original samples shown in red and the generated ones in blue. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.”
We observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval).
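The t-SNE visualization can be produced along the lines of the sketch below; `original_emb` and `generated_emb` stand for the original and deGAN document-embedding matrices.

```python
# t-SNE plot of 100 original (red) and 100 generated (blue) document embeddings
# (a sketch; original_emb and generated_emb are assumed inputs).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_plot(original_emb, generated_emb, n=100, seed=0):
    rng = np.random.default_rng(seed)
    orig = original_emb[rng.choice(len(original_emb), n, replace=False)]
    gen = generated_emb[rng.choice(len(generated_emb), n, replace=False)]
    pts = TSNE(n_components=2, random_state=seed).fit_transform(np.vstack([orig, gen]))
    plt.scatter(pts[:n, 0], pts[:n, 1], c="red", s=10, label="original")
    plt.scatter(pts[n:, 0], pts[n:, 1], c="blue", s=10, label="generated")
    plt.legend()
    plt.show()
```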
The TIME data set
In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
Table 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. The differences are also significant at the INLINEFORM0 level.
From Table 5, we observe that both GAN models yield improved performance of supervised learning. For weGAN, on average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6.
In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.
For deGAN, we also compare the original and artificial samples in terms of the highest probability words. Table 7 shows one record for each category.
From Table 7, we observe that the original and artificial bag-of-words representations are generally alike, and the words within the same sample are related to each other to some extent.
We also perform dimensional reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. We observe that the original and generated points are mixed together, but deGAN cannot reproduce the outliers.
The 20 Newsgroups data set
The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpus. We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents. We train weGAN and deGAN as described at the beginning of Section 4, except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM4 level. The other results are similar to the previous two data sets and are therefore omitted here.
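The super-category grouping can be set up as in the sketch below, using scikit-learn's copy of 20 Newsgroups; the exact assignment of newsgroups to super-categories is our reading of the description above and therefore an assumption.

```python
# Grouping 20 Newsgroups into 6 super-categories (a sketch; the assignment of
# newsgroups to super-categories is an assumption, and "misc.forsale" is dropped).
from sklearn.datasets import fetch_20newsgroups

SUPER = {
    "religion": ["alt.atheism", "soc.religion.christian", "talk.religion.misc"],
    "computer": ["comp.graphics", "comp.os.ms-windows.misc", "comp.sys.ibm.pc.hardware",
                 "comp.sys.mac.hardware", "comp.windows.x"],
    "cars":     ["rec.autos", "rec.motorcycles"],
    "sport":    ["rec.sport.baseball", "rec.sport.hockey"],
    "science":  ["sci.crypt", "sci.electronics", "sci.med", "sci.space"],
    "politics": ["talk.politics.guns", "talk.politics.mideast", "talk.politics.misc"],
}

corpora = {name: fetch_20newsgroups(subset="train", categories=cats).data
           for name, cats in SUPER.items()}
```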
The Reuters-21578 data set
The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into INLINEFORM0 training documents, from which 692 validation documents are held out, and INLINEFORM1 testing documents. We train weGAN and deGAN in the same way as in the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM2 level except the Rand index. The other results are similar to the CNN and TIME data sets and are thereby omitted here.
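The corpus selection for Reuters-21578 can be sketched with the NLTK copy of the data set as follows; splitting by the NLTK fileid prefixes is an assumption about how the split above was obtained.

```python
# Selecting the eight Reuters-21578 categories and splitting by the NLTK fileid
# prefixes (a sketch; requires nltk.download("reuters") beforehand).
from nltk.corpus import reuters

KEEP = ["earn", "acq", "crude", "trade", "money-fx",
        "interest", "money-supply", "ship"]

train_ids = {c: [f for f in reuters.fileids(c) if f.startswith("training/")] for c in KEEP}
test_ids  = {c: [f for f in reuters.fileids(c) if f.startswith("test/")] for c in KEEP}
train_corpora = {c: [reuters.raw(f) for f in ids] for c, ids in train_ids.items()}
```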
Conclusion
In this paper, we have demonstrated the application of the GAN model to text data with multiple corpora. We have shown that the GAN model is not only able to generate images, but also able to refine word embeddings and generate document embeddings. Such models can better learn the inner structure of multi-corpus text data, and also benefit supervised learning. The improvements in supervised learning are not large but statistically significant. The weGAN model outperforms deGAN in terms of supervised learning on 3 out of 4 data sets, and is therefore recommended. The synonyms from weGAN also tend to be more relevant than those from the original word2vec model. The t-SNE plots show that the generated document embeddings are distributed similarly to the original ones.
References
M. Arjovsky, S. Chintala, and L. Bottou. (2017). Wasserstein GAN. arXiv:1701.07875.
D. Blei, A. Ng, and M. Jordan. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022.
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research. 12:2493-2537.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014).
J. Glover. (2016). Modeling documents with Generative Adversarial Networks. In Workshop on Adversarial Training (NIPS 2016).
S. Hochreiter and J. Schmidhuber. (1997). Long Short-term Memory. In Neural Computation, 9:1735-1780.
Y. Kim. (2014). Convolutional Neural Networks for Sentence Classification. In The 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Q. Le and T. Mikolov. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).
J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. (2017). Adversarial Learning for Neural Dialogue Generation. arXiv:1701.06547.
M.-Y. Liu, and O. Tuzel. (2016). Coupled Generative Adversarial Networks. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley. (2017). Least Squares Generative Adversarial Networks. arXiv:1611.04076.
T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. (2013). Distributed Representations of Words and Phrases and Their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013).
T. Mikolov, K. Chen, G. Corrado, and J. Dean. (2013b). Efficient Estimation of Word Representations in Vector Space. In Workshop (ICLR 2013).
M. Mirza, S. Osindero. (2014). Conditional Generative Adversarial Nets. arXiv:1411.1784.
A. Odena. (2016). Semi-supervised Learning with Generative Adversarial Networks. arXiv:1606.01583.
J. Pennington, R. Socher, and C. Manning. (2014). GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP 2014).
O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. (2017). Language Generation with Recurrent Generative Adversarial Networks without Pre-training. In 1st Workshop on Subword and Character level models in NLP (EMNLP 2017).
S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. (2017). Adversarial Generation of Natural Language. arXiv:1705.10929.
W. Rand. (1971). Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66:846-850.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. (2013). Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).
J. Springenberg. (2016). Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In 4th International Conference on Learning Representations (ICLR 2016).
L. van der Maaten, and G. Hinton. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
B. Wang, K. Liu, and J. Zhao. (2016). Conditional Generative Adversarial Networks for Commonsense Machine Comprehension. In Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).
Y. Zhang, Z. Gan, and L. Carin. (2016). Generating Text via Adversarial Training. In Workshop on Adversarial Training (NIPS 2016).
J. Zhao, M. Mathieu, and Y. LeCun. (2017). Energy-based Generative Adversarial Networks. In 5th International Conference on Learning Representations (ICLR 2017).
Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones. In a GAN model, there are two sub-models simultaneously trained: a generative model INLINEFORM0 from which artificial data examples can be sampled, and a discriminative model INLINEFORM1 which classifies real data examples and artificial ones from INLINEFORM2 . By training INLINEFORM3 to maximize its generation power, and training INLINEFORM4 to minimize the generation power of INLINEFORM5 , so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al. 2016).
The GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long-short term memory (Hochreiter and Schmidhuber, 1997) generator and approximated discretization to generate text data. Moreover, Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. Meanwhile, the GAN model can also be applied to generate bag-of-words embeddings of text data, which focus more on key terms in a text document rather than the original document itself. Glover (2016) provided such a model with the energy-based GAN (Zhao et al., 2017).
To the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs (Liu and Tuzel, 2016; Mirza and Osindero, 2014) have been proposed, but a class in multi-class classification is not the same as multiple corpora. Because knowing the underlying corpus membership of each text document can provide better information on how the text documents are organized, and documents from the same corpus are expected to share similar topics or key words, considering the membership information can benefit the training of a text model from a supervised perspective. We consider two problems associated with training multi-corpus text data: (1) Given a separate set of word embeddings from each corpus, such as the word2vec embeddings (Mikolov et al., 2013), how to obtain a better set of cross-corpus word embeddings from them? (2) How to incorporate the generation of document embeddings from different corpora in a single GAN model?
For the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.
Section 2 reviews existing GAN models related to this paper. Section 3 describes the GAN models on training cross-corpus word embeddings and generating document embeddings for each corpora, and explains the associated algorithms. Section 4 presents the results of the two models on text data sets, and transfers them to supervised learning. Section 5 summarizes the results and concludes the paper.
Literature Review
In a GAN model, we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 , and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 . The binary classifier INLINEFORM4 outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem DISPLAYFORM0
With sufficient training, it is shown in Goodfellow et al. (2014) that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 , i.e. INLINEFORM2 .
Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) is proposed which applies a 1-Lipschitz function as a discriminator. In a Wasserstein GAN, we consider the following minimax problem DISPLAYFORM0
These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones. We note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. We summarize such GANs because a corpus can be potentially treated as a class. The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task.
For instance, the CoGAN (Liu and Tuzel, 2016) considers pairs of data examples from different categories as follows INLINEFORM0
where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 (i.e. close to INLINEFORM2 ) are tied. Mirza and Osindero (2014) proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 . While these GANs generate samples resembling different classes, other variations of GANs apply the class labels for semi-supervised learning. For instance, Salimans et al. (2016) proposed the following objective DISPLAYFORM0
where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 -th artificial class. Similar models can be found in Odena (2016), the CatGAN in Springenberg (2016), and the LSGAN in Mao et al. (2017). However, all these models consider only images and do not produce word or document embeddings, therefore being different from our models.
For generating real text, Zhang et al. (2016) proposed textGAN in which the generator has the following form, DISPLAYFORM0
where INLINEFORM0 is the noise vector, INLINEFORM1 is the generated sentence, INLINEFORM2 are the words, and INLINEFORM3 . A uni-dimensional convolutional neural network (Collobert et al, 2011; Kim, 2014) is applied as the discriminator. Also, a weighted softmax function is applied to make the argmax function differentiable. With textGAN, sentences such as “we show the efficacy of our new solvers, making it up to identify the optimal random vector...” can be generated. Similar models can also be found in Wang et al. (2016), Press et al. (2017), and Rajeswar et al. (2017). The focus of our work is to summarize information from longer documents, so we apply document embeddings such as the tf-idf to represent the documents rather than to generate real text.
For generating bag-of-words embeddings of text, Glover (2016) proposed the following model DISPLAYFORM0
and INLINEFORM0 is the mean squared error of a de-noising autoencoder, and INLINEFORM1 is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness based on several corpora, while Glover (2016) assumed a single corpus.
For extracting word embeddings given text data, Mikolov et al. (2013) proposed the word2vec model, for which there are two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013b), where the neighboring words are used to predict the appearance of each word; the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2013), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as the embedding. For obtaining document embeddings, the para2vec model (Le and Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach by taking the average of the embeddings of each word in a document and output the document embedding is exhibited in Socher et al. (2013).
Models and Algorithms
Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”
weGAN: Training cross-corpus word embeddings
We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 .
Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 , we define its document embedding with INLINEFORM3 as follows, DISPLAYFORM0
where INLINEFORM0 can be any mapping. Similarly, we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows, with INLINEFORM3 trainable DISPLAYFORM0
In a typical example, word embeddings would be based on word2vec or GloVe. The function INLINEFORM0 can be based on tf-idf, i.e. INLINEFORM1 , where INLINEFORM2 is the word embedding of the INLINEFORM3 -th word in the INLINEFORM4 -th corpus INLINEFORM5 and INLINEFORM6 is the tf-idf representation of the INLINEFORM7 -th document INLINEFORM8 in the INLINEFORM9 -th corpus INLINEFORM10 .
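To make this representation concrete, the short sketch below computes tf-idf weighted document embeddings from a set of per-corpus word vectors. It is an illustration rather than the released code; the names docs and word_vectors, and the use of scikit-learn's TfidfVectorizer, are assumptions.

# Minimal sketch of the tf-idf weighted document embedding described above.
# `docs` is a list of raw documents from one corpus and `word_vectors` is a
# plain dict mapping each dictionary word to its 300-dim embedding (assumed).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_weighted_doc_embeddings(docs, word_vectors, dim=300):
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)                 # (n_docs, vocab_size)
    vocab = vectorizer.get_feature_names_out()             # scikit-learn >= 1.0
    # One row of word vectors per vocabulary entry; unknown words get zeros.
    W = np.vstack([word_vectors.get(w, np.zeros(dim)) for w in vocab])
    # Each document embedding is the tf-idf weighted sum of its word vectors.
    return tfidf @ W                                       # (n_docs, dim)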
To train the GAN model, we consider the following minimax problem DISPLAYFORM0
where INLINEFORM0 is a discriminator of whether a document is original or artificial. Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 , and INLINEFORM4 is a unit vector with only the INLINEFORM5 -th component being one and all other components being zeros. Note that INLINEFORM6 is equivalent to INLINEFORM7 , but we use the former notation due to its brevity.
The intuition of problem (8) is explained as follows. First we consider a discriminator INLINEFORM0 which is a feedforward neural network (FFNN) with binary outcomes, and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 . Discriminator INLINEFORM3 minimizes this classification error, i.e. it maximizes the log-likelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1. This corresponds to DISPLAYFORM0
For the generator INLINEFORM0 , we wish to minimize (8) against INLINEFORM1 so that we can apply the minimax strategy, and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 . Meanwhile, we also consider the classifier INLINEFORM4 with INLINEFORM5 outcomes, and associate INLINEFORM6 with label INLINEFORM7 , so that the generator INLINEFORM8 can learn from the document labeling in a semi-supervised way.
If the classifier INLINEFORM0 outputs a INLINEFORM1 -dimensional softmax probability vector, we minimize the following against INLINEFORM2 , which corresponds to (8) given INLINEFORM3 and INLINEFORM4 : DISPLAYFORM0
For the classifier INLINEFORM0 , we also minimize its negative log-likelihood DISPLAYFORM0
Assembling (9-11) together, we retrieve the original minimax problem (8).
We train the discriminator and the classifier, INLINEFORM0 , and the combined embeddings INLINEFORM1 according to (9-11) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.
The algorithm for weGAN is summarized in Algorithm 1, and Figure 1 illustrates the weGAN model.
Algorithm 1. Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 .
1. Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 .
2. Until the maximum number of iterations is reached:
   (a) Update INLINEFORM5 and INLINEFORM6 according to (9) and (11) given a mini-batch INLINEFORM7 of training examples INLINEFORM8 .
   (b) Update INLINEFORM9 according to (10) given a mini-batch INLINEFORM10 of training examples INLINEFORM11 .
3. Output INLINEFORM12 as the cross-corpus word embeddings.
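The sketch below illustrates one iteration of Algorithm 1 in PyTorch. It is not the paper's implementation: the exact forms of (9)-(11) are given by the equations above, and here standard binary cross-entropy for the discriminator and cross-entropy for the classifier are assumed, consistent with the verbal description; labeling original documents as 1 and documents under the combined embedding as 0 is also an assumption.

# Illustrative single weGAN training step, assuming the mini-batch comes from
# one corpus, `W_orig` holds that corpus' fixed word embeddings (V x d), `U` is
# the trainable cross-corpus embedding matrix (a torch.nn.Parameter), D ends
# with a sigmoid, C outputs raw logits, and `labels` is a LongTensor of
# document labels (corpus membership).
import torch
import torch.nn.functional as F

def wegan_step(U, D, C, opt_dc, opt_u, tfidf_batch, W_orig, labels):
    ones = torch.ones(len(labels), 1)
    zeros = torch.zeros(len(labels), 1)
    g_orig = tfidf_batch @ W_orig          # documents under the original embeddings
    g_new = tfidf_batch @ U                # the same documents under U

    # Update D and C (corresponding to (9) and (11)).
    d_loss = F.binary_cross_entropy(D(g_orig), ones) + \
             F.binary_cross_entropy(D(g_new.detach()), zeros)
    c_loss = F.cross_entropy(C(g_new.detach()), labels)
    opt_dc.zero_grad(); (d_loss + c_loss).backward(); opt_dc.step()

    # Update U (corresponding to (10)): fool D while fitting the document labels.
    g_new = tfidf_batch @ U
    u_loss = F.binary_cross_entropy(D(g_new), ones) + F.cross_entropy(C(g_new), labels)
    opt_u.zero_grad(); u_loss.backward(); opt_u.step()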
deGAN: Generating document embeddings for multi-corpus text data
In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 , INLINEFORM1 . We construct INLINEFORM2 generators, INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 . As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let INLINEFORM6 . We initialize a noise vector INLINEFORM7 , where INLINEFORM8 , and INLINEFORM9 is any noise distribution.
For a generator INLINEFORM0 represented by its parameters, we first map the noise vector INLINEFORM1 to the hidden layer, which represents different topics. We consider two hidden vectors, INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus, DISPLAYFORM0
Here INLINEFORM0 represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function INLINEFORM1 , DISPLAYFORM0
To summarize, we may represent the process from noise to the document embedding as follows, DISPLAYFORM0
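A minimal PyTorch sketch of one deGAN generator following the mappings above is given below. The hidden sizes and the activations (ReLU and softmax) are assumptions, as is keeping a separate general-topic layer per generator; the text only specifies two hidden vectors, the absence of bias terms, and an output comparable to normalized tf-idf.

# Illustrative deGAN generator: noise -> (general topics, specific topics)
# -> generated document embedding, with bias terms omitted as described above.
import torch
import torch.nn as nn

class DeGANGenerator(nn.Module):
    def __init__(self, noise_dim=100, hidden_dim=50, doc_dim=2000):
        super().__init__()
        self.general = nn.Linear(noise_dim, hidden_dim, bias=False)   # general topics
        self.specific = nn.Linear(noise_dim, hidden_dim, bias=False)  # per-corpus topics
        self.out = nn.Linear(2 * hidden_dim, doc_dim, bias=False)
        self.a1 = nn.ReLU()            # assumed first activation
        self.a2 = nn.Softmax(dim=-1)   # assumed output activation, keeps output tf-idf-like

    def forward(self, noise):
        h_general = self.a1(self.general(noise))
        h_specific = self.a1(self.specific(noise))
        return self.a2(self.out(torch.cat([h_general, h_specific], dim=-1)))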
Given the generated document embeddings INLINEFORM0 , we consider the following minimax problem to train the generator INLINEFORM1 and the discriminator INLINEFORM2 : INLINEFORM3 INLINEFORM4
Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 . Note that when INLINEFORM3 , the discriminator part of our model is equivalent to the original GAN model.
To explain (15), first we consider the discriminator INLINEFORM0 . Because there are multiple corpora of text documents, here we consider INLINEFORM1 categories as output of INLINEFORM2 , from which categories INLINEFORM3 represent the original corpora INLINEFORM4 , and categories INLINEFORM5 represent the generated document embeddings (e.g. bag-of-words) from INLINEFORM6 . Assume the discriminator INLINEFORM7 , a feedforward neural network, outputs the distribution of a text document being in each category. We maximize the log-likelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0
Such a classifier not only classifies text documents into different categories, but also considers INLINEFORM0 “fake” categories from the generators. When training the generators INLINEFORM1 , we minimize the following, which makes a comparison between the INLINEFORM2 -th and INLINEFORM3 -th categories DISPLAYFORM0
The intuition of (17) is that for each generated document embedding INLINEFORM0 , we need to decrease INLINEFORM1 , which is the probability of the generated embedding being correctly classified, and increase INLINEFORM2 , which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 . The ratio in (17) reflects these two properties.
We iteratively train (16) and (17) until the classification error of INLINEFORM0 becomes stable. The algorithm for deGAN is summarized in Algorithm 2, and Figure 2 illustrates the deGAN model.
Algorithm 2.
1. Randomly initialize the weights of INLINEFORM0 .
2. Initialize the discriminator INLINEFORM1 with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and other parameters randomly initialized.
3. Until the maximum number of iterations is reached:
   (a) Update INLINEFORM2 according to (16) given a mini-batch of training examples INLINEFORM3 and samples from noise INLINEFORM4 .
   (b) Update INLINEFORM5 according to (17) given a mini-batch of training examples INLINEFORM6 and samples from noise INLINEFORM7 .
4. Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier.
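The following PyTorch sketch illustrates one iteration of Algorithm 2 under the verbal description of (16) and (17): the discriminator is trained with a 2K-way cross-entropy, and each generator minimizes the log of the ratio between its "fake" class and its target class. The uniform noise range and the assumption that the discriminator outputs raw logits are illustrative choices, not the exact settings of the paper.

# Illustrative deGAN training step for K corpora; D outputs 2K-way logits with
# classes 0..K-1 for real corpora and K..2K-1 for their generated counterparts.
import torch
import torch.nn.functional as F

def degan_step(generators, D, opt_d, opt_g, real_batches, noise_dim, K):
    # (16): train D to classify real and generated documents into 2K categories.
    d_loss = 0.0
    for k, real in enumerate(real_batches):              # real: (B, doc_dim)
        noise = torch.rand(len(real), noise_dim)         # uniform noise (assumed range)
        fake = generators[k](noise)
        d_loss = d_loss + F.cross_entropy(D(real), torch.full((len(real),), k, dtype=torch.long))
        d_loss = d_loss + F.cross_entropy(D(fake.detach()),
                                          torch.full((len(real),), K + k, dtype=torch.long))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # (17): train each generator to lower its "fake" probability and raise the
    # probability of its target corpus.
    g_loss = 0.0
    for k, real in enumerate(real_batches):
        fake = generators[k](torch.rand(len(real), noise_dim))
        logp = F.log_softmax(D(fake), dim=-1)
        g_loss = g_loss + (logp[:, K + k] - logp[:, k]).mean()   # log of the ratio in (17)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()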
We next show that from (15), the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 , which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario.
Proposition 1. Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 ; INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous; and that INLINEFORM6 are solutions to (15). Then INLINEFORM7 , the probability density of the document embeddings from INLINEFORM8 , INLINEFORM9 , are equal to INLINEFORM10 .
Proof. Since INLINEFORM0 is bounded, all of the integrals exhibited next are well-defined and finite. Since INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are continuous, it follows that for any parameters, INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support.
From the first line of (15), INLINEFORM0
This problem reduces to INLINEFORM0 subject to INLINEFORM1 , the solution of which is INLINEFORM2 , INLINEFORM3 . Therefore, the solution to (18) is DISPLAYFORM0
We then obtain from the second line of (15) that INLINEFORM0
From non-negativity of the Kullback-Leibler divergence, we conclude that INLINEFORM0
Experiments
In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpus to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set.
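A sketch of this pre-processing pipeline is shown below, assuming gensim >= 4.0 and NLTK (with the punkt tokenizer installed); the exact tokenizer, stemmer, and word2vec hyperparameters other than the 300-dimensional vectors are assumptions.

# Minimal sketch of the pre-processing: lower-case, stem, and train word2vec
# per corpus with 300-dim vectors.
from gensim.models import Word2Vec
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stemmer = PorterStemmer()

def preprocess(doc):
    return [stemmer.stem(tok) for tok in word_tokenize(doc.lower())]

def train_corpus_embeddings(corpus_docs, dim=300):
    sentences = [preprocess(doc) for doc in corpus_docs]
    model = Word2Vec(sentences, vector_size=dim, window=5, min_count=1, workers=4)
    return model.wv            # per-corpus word embeddings used as the given embeddings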
The document embedding in weGAN is the tf-idf weighted word embedding transformed by the INLINEFORM0 activation, i.e. DISPLAYFORM0
For deGAN, we use INLINEFORM0 -normalized tf-idf as the document embedding because it is easier to interpret than the transformed embedding in (20).
For weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained from all documents. For training our models, we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes, and the discriminator INLINEFORM3 has a single hidden layer with 10 hidden nodes. All these hyperparameters have been optimized. For the labels INLINEFORM4 in (8), we apply the corpus membership of each document.
For the noise distribution INLINEFORM0 for deGAN, we apply the uniform distribution INLINEFORM1 . In (14) for deGAN, INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 -normalized tf-idf vectors for each document. For the discriminator INLINEFORM5 of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator INLINEFORM6 , we apply a learning rate of INLINEFORM7 , and for the generator INLINEFORM8 , we apply a learning rate of INLINEFORM9 , because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.
For weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the same FFNN. Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch.
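The first metric can be computed as in the sketch below: cluster the document embeddings with K-means and score the clustering against the corpus membership with the (unadjusted) Rand index. The helper names are assumptions, and the Rand index is computed directly over document pairs rather than through a library call.

# Illustrative evaluation of metric (1): K-means clustering followed by the
# Rand index against the true corpus membership.
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def rand_index(labels_true, labels_pred):
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(int((labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j]))
                for i, j in pairs)
    return agree / len(pairs)

def clustering_rand_index(doc_embeddings, corpus_labels, n_corpora):
    pred = KMeans(n_clusters=n_corpora, n_init=10).fit_predict(doc_embeddings)
    return rand_index(list(corpus_labels), list(pred))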
The CNN data set
In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from Table 1 that weGAN tends to improve both metrics.
Meanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored by the synonyms of each word measured by the cosine similarity. On average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms of three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.
We observe from Table 2 that for “Obama,” “Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before training but is afterwards, which shows that the synonyms after training are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.
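The synonym comparison above can be reproduced with a small cosine-similarity search such as the sketch below, where vocab is the word list and emb the corresponding embedding matrix before or after weGAN training; both names are assumptions.

# Top-k most similar words to a query word under cosine similarity.
import numpy as np

def top_synonyms(word, vocab, emb, k=10):
    idx = vocab.index(word)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    order = np.argsort(-sims)
    return [vocab[i] for i in order if i != idx][:k]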
We next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the FFNN initialized with word2vec. The change is also statistically significant at the INLINEFORM0 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.
To compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words sorted by frequency in descending order where the stop words are removed. The bag-of-words embeddings are shown in Table 4.
From Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more named entities, while those of the artificial deGAN documents tend to be more general. There are many additional examples not shown here in which the artificial bag-of-words embeddings contain named entities such as “Turkey,” “ISIS,” etc. from generated documents, e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”
We also perform dimensionality reduction using t-SNE (van der Maaten and Hinton, 2008), and plot 100 random samples from each original or artificial category. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.” The results are shown in Figure 3, where the original samples are shown in red and the generated ones in blue.
We observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval).
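The visualization above can be produced along the lines of the sketch below, which samples 100 original and 100 generated document embeddings and projects them to two dimensions with t-SNE; plotting details are assumptions.

# Illustrative t-SNE plot of original (red) versus generated (blue) embeddings.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(original_embs, generated_embs, n_per_group=100, seed=0):
    rng = np.random.default_rng(seed)
    orig = original_embs[rng.choice(len(original_embs), n_per_group, replace=False)]
    gen = generated_embs[rng.choice(len(generated_embs), n_per_group, replace=False)]
    points = TSNE(n_components=2, random_state=seed).fit_transform(np.vstack([orig, gen]))
    plt.scatter(points[:n_per_group, 0], points[:n_per_group, 1], c="red", label="original")
    plt.scatter(points[n_per_group:, 0], points[n_per_group:, 1], c="blue", label="generated")
    plt.legend()
    plt.show()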
The TIME data set
In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.
Table 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. The differences are also significant at the INLINEFORM0 level.
From Table 5, we observe that both GAN models yield improved performance of supervised learning. For weGAN, on an average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6.
In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.
For deGAN, we also compare the original and artificial samples in terms of the highest probability words. Table 7 shows one record for each category.
From Table 7, we observe that the produced bag-of-words are generally alike, and the words in the same sample are related to each other to some extent.
We also perform dimensionality reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. We observe that the original and generated points are mixed together, but deGAN cannot reproduce the outliers.
The 20 Newsgroups data set
The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpus. We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents. We train weGAN and deGAN as described in the beginning of Section 4, except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM4 level. The other results are similar to the previous two data sets and are therefore omitted here.
The Reuters-21578 data set
The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into INLINEFORM0 training documents, from which 692 validation documents are held out, and INLINEFORM1 testing documents. We train weGAN and deGAN in the same way as in the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM2 level except the Rand index. The other results are similar to the CNN and TIME data sets and are thereby omitted here.
Conclusion
In this paper, we have demonstrated the application of the GAN model on text data with multiple corpora. We have shown that the GAN model is not only able to generate images, but also able to refine word embeddings and generate document embeddings. Such models can better learn the inner structure of multi-corpus text data, and also benefit supervised learning. The improvements in supervised learning are not large but statistically significant. The weGAN model outperforms deGAN in terms of supervised learning for 3 out of 4 data sets, and is thereby recommended. The synonyms from weGAN also tend to be more relevant than the original word2vec model. The t-SNE plots show that our generated document embeddings are similarly distributed as the original ones.
Reference
M. Arjovsky, S. Chintala, and L. Bottou. (2017). Wasserstein GAN. arXiv:1701.07875.
D. Blei, A. Ng, and M. Jordan. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022.
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research. 12:2493-2537.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014).
J. Glover. (2016). Modeling documents with Generative Adversarial Networks. In Workshop on Adversarial Training (NIPS 2016).
S. Hochreiter and J. Schmidhuber. (1997). Long Short-term Memory. In Neural Computation, 9:1735-1780.
Y. Kim. (2014). Convolutional Neural Networks for Sentence Classification. In The 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Q. Le and T. Mikolov. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).
J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. (2017). Adversarial Learning for Neural Dialogue Generation. arXiv:1701.06547.
M.-Y. Liu, and O. Tuzel. (2016). Coupled Generative Adversarial Networks. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley. (2017). Least Squares Generative Adversarial Networks. arXiv:1611.04076.
T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. (2013). Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013).
T. Mikolov, K. Chen, G. Corrado, and J. Dean. (2013b). Efficient Estimation of Word Representations in Vector Space. In Workshop (ICLR 2013).
M. Mirza, S. Osindero. (2014). Conditional Generative Adversarial Nets. arXiv:1411.1784.
A. Odena. (2016). Semi-supervised Learning with Generative Adversarial Networks. arXiv:1606.01583.
J. Pennington, R. Socher, and C. Manning. (2014). GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP 2014).
O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. (2017). Language Generation with Recurrent Generative Adversarial Networks without Pre-training. In 1st Workshop on Subword and Character level models in NLP (EMNLP 2017).
S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. (2017). Adversarial Generation of Natural Language. arXiv:1705.10929.
W. Rand. (1971). Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66:846-850.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems 29 (NIPS 2016).
R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. (2013). Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).
J. Springenberg. (2016). Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In 4th International Conference on Learning Representations (ICLR 2016).
L. van der Maaten, and G. Hinton. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
B. Wang, K. Liu, and J. Zhao. (2016). Conditional Generative Adversarial Networks for Commonsense Machine Comprehension. In Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).
Y. Zhang, Z. Gan, and L. Carin. (2016). Generating Text via Adversarial Training. In Workshop on Adversarial Training (NIPS 2016).
J. Zhao, M. Mathieu, and Y. LeCun. (2017). Energy-based Generative Adversarial Networks. In 5th International Conference on Learning Representations (ICLR 2017). | CNN, TIME, 20 Newsgroups, and Reuters-21578 |
637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52 | 637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52_0 | Q: Do they report results only on English datasets?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that, when an entire document is considered, error-free human writing in research papers is essentially nonexistent. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been made possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utmost importance for high-performing methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first one, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which outputs a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has been shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
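A minimal PyTorch sketch of this denoising stack is given below: two sets of three linear layers compress each 768-dimensional token feature to 128, 32, and 12 dimensions and then reconstruct it, trained with the MSE loss against the embedding of the complete sentence. Tensors are kept in (batch, sequence length, feature) order here, and the ReLU activations are assumptions.

# Illustrative denoising stack of multilayer perceptrons for Stacked DeBERT.
import torch
import torch.nn as nn

class DenoisingStack(nn.Module):
    def __init__(self, feat=768):
        super().__init__()
        self.encode = nn.Sequential(          # f: h_inc -> z (abstract features)
            nn.Linear(feat, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 12))
        self.decode = nn.Sequential(          # g: z -> h_rec (reconstruction)
            nn.Linear(12, 32), nn.ReLU(),
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, feat))

    def forward(self, h_inc):                 # h_inc: (batch, 128, 768)
        return self.decode(self.encode(h_inc))

def reconstruction_loss(model, h_inc, h_comp):
    return nn.functional.mse_loss(model(h_inc), h_comp)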
Classification is done with a feedforward network and a softmax activation function. Softmax $\sigma $ is a discrete probability distribution function over $N_C$ classes, with the sum of the class probabilities being 1, and the predicted class is the one with the maximum probability. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
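For completeness, the classification step can be sketched as below, where t is assumed to be the pooled sentence representation produced by the final transformer layers and two output classes are used; this is an illustration of the equation above rather than the exact implementation.

# Feedforward layer plus softmax; the predicted class is the argmax.
import torch
import torch.nn as nn

classifier = nn.Linear(768, 2)               # o = W t + b

def predict(t):                              # t: (batch, 768)
    probs = torch.softmax(classifier(t), dim=-1)
    return probs.argmax(dim=-1), probs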
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor-quality texts obtained from Twitter, called tweets, are ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken-style text used in writing, without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis of this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech Python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability, whether they are freely available, and whether they impose high daily usage limitations.
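The 2-step corruption process can be sketched as below. The gTTS call and the macOS say command are real interfaces, but the Wit.ai transcription step is left as a placeholder since its exact API usage is not reproduced here; the file paths are assumptions.

# Sketch of the TTS -> STT pipeline used to create sentences with STT error.
import subprocess
from gtts import gTTS

def synthesize(sentence, path, engine="gtts"):
    if engine == "gtts":
        gTTS(text=sentence, lang="en").save(path)                   # e.g. "tmp.mp3"
    elif engine == "macsay":
        # macOS only; the output format depends on the extension `say` supports.
        subprocess.run(["say", "-o", path, sentence], check=True)

def wit_speech_to_text(audio_path):
    # Placeholder for the Wit.ai transcription call.
    raise NotImplementedError("send the audio file to the Wit.ai speech endpoint here")

def corrupt_with_stt(sentence, path="tmp.mp3", engine="gtts"):
    synthesize(sentence, path, engine)
    return wit_speech_to_text(path)          # sentence with STT error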
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the sentences with STT error is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report it instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise.
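Assuming iBLEU is simply one minus the BLEU score between the complete sentence (reference) and its STT output (hypothesis), it can be computed as in the sketch below; smoothing is applied because the sentences are short.

# Illustrative iBLEU computation with NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(complete, incomplete):
    reference = [complete.lower().split()]
    hypothesis = incomplete.lower().split()
    bleu = sentence_bleu(reference, hypothesis,
                         smoothing_function=SmoothingFunction().method1)
    return 1.0 - bleu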
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) .
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that does not suffer from out-of-vocabulary issues. The authors achieve this by hashing subword tokens instead of whole words, making the representation vocabulary-independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications is given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on the complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). For the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and corrected texts combined. The reported F1 scores are the best results obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-grams, a feature vector size of 768 so as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with a Random Forest classifier and estimators $[50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $\alpha =10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidean metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $\alpha =10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
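Two of the listed classifiers, configured with the stated hyperparameters and applied on top of the 768-dimensional hashed features, can be instantiated as in the sketch below; the training data names are placeholders.

# Example scikit-learn configurations matching two entries above.
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

mlp = MLPClassifier(hidden_layer_sizes=(300, 100, 50))
forest = RandomForestClassifier(n_estimators=50)

# Usage sketch, assuming X_train/y_train hold the hashed features and labels:
# mlp.fit(X_train, y_train); y_pred = mlp.predict(X_test)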
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves an accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs at approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean higher quantity of noise. As expected, the models' accuracy degrade with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model does not only outperform them but does so with a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and between gtts-witai and macsay-witai being 1.89, whereas in Stacked DeBERT those differences are 1.89 and 0.94 respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve classification accuracy by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with a neural classifier. Our model showed better performance when evaluated on F1 scores in both the Twitter sentiment and the intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in high-achieving classes for significant improvements in lower-performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future work, we plan to evaluate the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | Yes |
4b8257cdd9a60087fa901da1f4250e7d910896df | 4b8257cdd9a60087fa901da1f4250e7d910896df_0 | Q: How do the authors define or exemplify 'incorrect words'?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
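For illustration, the following is a minimal sketch of this input pipeline using the HuggingFace transformers library as a stand-in; the authors' exact implementation is not reproduced here, and the example sentence is illustrative only.

```python
# Sketch of the BERT input pipeline described above: lower-casing, tokenization,
# [CLS]/[SEP] markers, and the summed token/segment/position embeddings handled
# internally by the model. Treat this as illustrative, not the authors' code.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # lower-cases input
model = BertModel.from_pretrained("bert-base-uncased")

sentence = "what is the next train to marienplatz"
enc = tokenizer(
    sentence,
    padding="max_length",
    max_length=128,          # maximum sequence length quoted later in the paper
    truncation=True,
    return_tensors="pt",
)
# enc contains input_ids (token embedding lookup indices), token_type_ids
# (segment flags) and attention_mask; position ids are added inside the model.
with torch.no_grad():
    hidden = model(**enc).last_hidden_state   # shape: (1, 128, 768)
print(hidden.shape)
```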
Although BERT has been shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers each, of which two are hidden layers. The first set is responsible for compressing $h_{inc}$ into a latent-space representation, extracting more abstract features into lower-dimensional vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
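As a rough illustration of the reconstruction step described above, the following PyTorch-style sketch wires together the compression and reconstruction stacks and the MSE reconstruction loss. The layer sizes (768/128/32/12) and the loss come from the text; the exact module layout, activations, and the axis handling are assumptions made for this sketch.

```python
# Minimal sketch: two MLP stacks compress the incomplete-sentence embeddings
# along the feature axis (768 -> 128 -> 32 -> 12) and reconstruct them back,
# trained with MSE against the complete-sentence embeddings. Not the authors'
# implementation; activations and layout are assumptions.
import torch
import torch.nn as nn

class DenoisingMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # h_inc -> z1 -> z2 -> z
            nn.Linear(768, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 12),
        )
        self.decoder = nn.Sequential(            # z -> h_rec2 -> h_rec1 -> h_rec
            nn.Linear(12, 32), nn.ReLU(),
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 768),
        )

    def forward(self, h_inc):
        # h_inc: (N_bs, 768, 128); move the 768-dim embedding axis last
        x = h_inc.transpose(1, 2)                # (N_bs, 128, 768)
        z = self.encoder(x)                      # compressed representation
        h_rec = self.decoder(z).transpose(1, 2)  # back to (N_bs, 768, 128)
        return h_rec

model = DenoisingMLP()
criterion = nn.MSELoss()
h_inc = torch.randn(4, 768, 128)   # embeddings of incomplete sentences (dummy)
h_comp = torch.randn(4, 768, 128)  # embeddings of their complete versions (dummy)
loss = criterion(model(h_inc), h_comp)
loss.backward()
```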
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the class probabilities being 1 and the maximum value indicating the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
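A minimal sketch of this classification head follows; the 768-dimensional sentence representation and the two-class setting come from the text, while the choice of pooling (e.g. taking the [CLS] position) is an illustrative assumption.

```python
# Sketch of the classification head: a feedforward layer followed by softmax,
# with the predicted class taken as the argmax of the class probabilities.
import torch
import torch.nn as nn

num_classes = 2                        # e.g. positive/negative or the two intents
classifier = nn.Linear(768, num_classes)

t = torch.randn(4, 768)                # sentence representation (assumed pooled)
o = classifier(t)                      # o = W t + b
probs = torch.softmax(o, dim=-1)       # sums to 1 over the N_C classes
predicted_class = probs.argmax(dim=-1)
print(predicted_class)
```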
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor-quality texts obtained from Twitter, called tweets, are therefore ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken-style text used in writing, without strong consideration for grammar or sentence correctness. Thus, it contains many mistakes, as specified in Table TABREF11.
Even though this corpus provides incorrect sentences and their emotional labels, it lacks their respective corrected sentences, which are necessary for the training of our model. In order to obtain this missing information, we recruit native English speakers through an unbiased and anonymous platform called Amazon Mechanical Turk (MTurk) BIBREF19, a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks in which native English speakers reformat the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has the class distribution shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples used in the evaluation stage, with 25 negative and 25 positive. This totals 300 samples with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis of this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts, a Google Text-to-Speech Python library, and macsay, a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai, freely available and maintained by Wit.ai. The TTS and STT modules were chosen according to code availability and whether they are freely available or impose high daily usage limitations.
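A minimal sketch of this 2-step corpus generation is shown below. The gTTS call and the macOS say command follow their documented usage; the Wit.ai endpoint, API version and content type are assumptions about its public HTTP API, and the token placeholder and file names are illustrative only.

```python
# Sketch: TTS on the complete sentence (gtts or the macOS `say` command), then
# STT via Wit.ai on the resulting audio to obtain a noisy transcription.
import subprocess
import requests
from gtts import gTTS

sentence = "i want to go to marienplatz"

# Step 1a: gtts TTS
gTTS(text=sentence, lang="en").save("sentence_gtts.mp3")

# Step 1b: macsay TTS (macOS only; `say -o` writes an AIFF file)
subprocess.run(["say", "-o", "sentence_macsay.aiff", sentence], check=True)

# Step 2: STT with Wit.ai (hypothetical token; endpoint and content type assumed
# from Wit.ai's public documentation and may differ from the authors' setup)
WIT_TOKEN = "YOUR_WIT_AI_TOKEN"
with open("sentence_gtts.mp3", "rb") as f:
    resp = requests.post(
        "https://api.wit.ai/speech",
        headers={"Authorization": f"Bearer {WIT_TOKEN}",
                 "Content-Type": "audio/mpeg3"},
        data=f,
    )
print(resp.json())   # transcription containing possible STT errors
```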
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT-imbued sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
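As a small worked sketch, iBLEU can be computed as the complement of a sentence-level BLEU score, here with NLTK; the smoothing choice is an assumption, since the text only describes iBLEU as the inverse of BLEU.

```python
# Sketch of the iBLEU noise measure: iBLEU = 1 - BLEU, using NLTK's
# sentence-level BLEU with a simple smoothing function (an assumption).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

complete = "when is the next train to garching".split()
stt_output = "when is next train garching".split()

bleu = sentence_bleu([complete], stt_output,
                     smoothing_function=SmoothingFunction().method1)
ibleu = 1.0 - bleu      # higher iBLEU -> noisier sentence
print(round(ibleu, 2))
```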
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the following three services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spaCy and TensorFlow backends).
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that does not suffer from out-of-vocabulary issues. The authors achieve this by hashing sub-word tokens (character n-grams) instead of whole words, making the representation vocabulary-independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications is given in Section SECREF31.
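A minimal sketch of this kind of feature extraction follows; the word-wrapping, trigram extraction and hashing-trick details are assumptions for illustration, with only the trigrams and the 768-dimensional feature size taken from the training specifications quoted later.

```python
# Sketch of semantic hashing features: wrap each word in '#', split it into
# character trigrams, and hash the trigrams into a fixed-size feature vector
# (768 to match BERT). Vocabulary-free by construction; details are assumed.
import zlib
import numpy as np

def semantic_hash_features(sentence, dim=768):
    vec = np.zeros(dim, dtype=np.float32)
    for word in sentence.lower().split():
        padded = f"#{word}#"
        for i in range(len(padded) - 2):
            trigram = padded[i:i + 3]
            vec[zlib.crc32(trigram.encode()) % dim] += 1.0  # hashing trick
    return vec

features = semantic_hash_features("find the next connection to olympia einkaufszentrum")
print(features.shape)   # (768,)
```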
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and correct texts combined. The reported F1 scores are the best scores obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations are available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-grams, with a feature vector size of 768 to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimators $[50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $\alpha =10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidean metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $\alpha =10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
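For reference, a sketch of how a few of these scikit-learn classifiers could be instantiated with the quoted hyperparameters is shown below; unspecified settings fall back to library defaults, and the L2 variants are shown where the text lists both L1 and L2.

```python
# Sketch of several of the 13 classifiers listed above, using scikit-learn with
# the hyperparameters quoted in the text (remaining settings: library defaults).
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neighbors import NearestCentroid
from sklearn.naive_bayes import BernoulliNB

classifiers = {
    "mlp": MLPClassifier(hidden_layer_sizes=(300, 100, 50)),
    "random_forest": RandomForestClassifier(n_estimators=50),
    "grid_rf": GridSearchCV(RandomForestClassifier(),
                            {"n_estimators": [50, 60, 70]}, cv=5),
    "linear_svc_l2": LinearSVC(penalty="l2", tol=1e-3),
    "sgd_l2": SGDClassifier(alpha=1e-4, penalty="l2"),
    "nearest_centroid": NearestCentroid(metric="euclidean"),
    "bernoulli_nb": BernoulliNB(alpha=1e-2),
    "logreg": LogisticRegression(penalty="l2", tol=1e-4, C=1.0),
}
```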
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2\times 10^{-5}$, maximum sequence length of 128, and warm-up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
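A sketch of this fine-tuning configuration using HuggingFace classes as a stand-in is given below; the steps-per-epoch value is a placeholder that depends on the corpus, and the authors' exact training script is not reproduced here.

```python
# Sketch of the BERT fine-tuning setup quoted above: BERT-base-uncased,
# Adam-style optimizer with lr 2e-5, 3 epochs, and 10% linear warm-up.
import torch
from transformers import BertForSequenceClassification, get_linear_schedule_with_warmup

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

num_epochs, steps_per_epoch = 3, 50        # steps_per_epoch depends on the corpus
total_steps = num_epochs * steps_per_epoch
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.1 * total_steps), num_training_steps=total_steps
)
```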
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
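The optimizer and loss settings for the reconstruction stage translate into code roughly as follows; the small placeholder network stands in for the multilayer-perceptron stack and is not the authors' implementation.

```python
# Sketch of the reconstruction-stage training setup: Adam with lr 1e-3,
# weight decay 1e-5, and an MSE loss. The network below is a placeholder.
import torch
import torch.nn as nn

mlp_stack = nn.Sequential(nn.Linear(768, 12), nn.ReLU(), nn.Linear(12, 768))
optimizer = torch.optim.Adam(mlp_stack.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()
num_epochs = 100      # 100 or 1,000 epochs depending on the experiment
batch_size = 8        # 4 for the Twitter corpus, 8 for the Chatbot corpus
```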
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model achieves better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs at approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and between gtts-witai and macsay-witai being 1.89. In Stacked DeBERT, those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in the high-achieving class for significant improvements in lower-performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy on clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | typos in spellings or ungrammatical words |
7e161d9facd100544fa339b06f656eb2fc64ed28 | 7e161d9facd100544fa339b06f656eb2fc64ed28_0 | Q: How many vanilla transformers do they use after applying an embedding layer?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been made possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has been shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the class probabilities being 1 and the maximum value indicating the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis of this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT-imbued sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) .
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations are available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-grams, with a feature vector size of 768 to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimators $[50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $\alpha =10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidean metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $\alpha =10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the target/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although to a cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and between gtts-witai and macsay-witai being 1.89. In Stacked DeBERT, those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | Unanswerable |
abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5 | abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5_0 | Q: Do they test their approach on a dataset without incomplete data?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been made possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has been shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the class probabilities being 1 and the maximum value indicating the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus provides incorrect sentences and their sentiment labels, it lacks the corresponding corrected sentences, which are necessary for training our model. In order to obtain this missing information, we recruit native English speakers through an unbiased and anonymous platform, Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks in which native English speakers rewrite the original incorrect tweets as correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has the class distribution shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples used in the evaluation stage, with 25 negative and 25 positive. This totals 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification on incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis of this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentences. Here, we apply gtts, a Google Text-to-Speech Python library, and macsay, the say terminal command available in macOS. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here is witai, freely available and maintained by Wit.ai. These TTS and STT modules were chosen based on code availability, whether they are freely available, and whether they impose high daily usage limits.
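A sketch of this 2-step corruption pipeline is shown below. gtts and the macOS say command are used as described in the text; the Wit.ai request (endpoint, headers, response parsing) and the placeholder token are assumptions for illustration only and should be checked against the current Wit.ai documentation.

```python
# Sketch: complete sentence -> TTS audio -> STT text with transcription errors.
import subprocess
import requests
from gtts import gTTS

def tts_gtts(sentence: str, path: str = "utt.mp3") -> str:
    gTTS(text=sentence, lang="en").save(path)                   # Google TTS (gtts library)
    return path

def tts_macsay(sentence: str, path: str = "utt.aiff") -> str:
    subprocess.run(["say", "-o", path, sentence], check=True)   # macOS `say` command
    return path

def stt_witai(audio_path: str, token: str) -> str:
    # Assumed Wit.ai speech call; verify endpoint, content type and response
    # format in the Wit.ai docs before relying on this.
    with open(audio_path, "rb") as f:
        resp = requests.post(
            "https://api.wit.ai/speech",
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "audio/mpeg"},
            data=f,
        )
    return resp.json().get("text", "")

noisy = stt_witai(tts_gtts("when is the next train to marienplatz"), token="WIT_TOKEN")
print(noisy)
```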
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, and thus varying rates of missing and incorrect words. The level of noise in the STT-corrupted sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
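Assuming, as the text implies, that iBLEU is simply one minus the BLEU score between the complete reference sentence and the STT output, a small sketch with NLTK would be:

```python
# iBLEU sketch: higher iBLEU means a noisier STT output (iBLEU = 1 - BLEU, as implied above).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(reference: str, hypothesis: str) -> float:
    ref_tokens = [reference.lower().split()]
    hyp_tokens = hypothesis.lower().split()
    bleu = sentence_bleu(ref_tokens, hyp_tokens,
                         smoothing_function=SmoothingFunction().method1)
    return 1.0 - bleu

print(ibleu("when is the next train to marienplatz",
            "when is the next train to marion platz"))
```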
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the following three services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backends).
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that does not suffer from out-of-vocabulary issues. The authors achieve this by using hashed sub-word tokens instead of whole words, making the method vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications is given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and correct texts combined. The reported F1 scores are the best values obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations are available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-grams, with a feature vector size of 768 to match the BERT embedding size, and with 13 classifiers whose parameters are set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with a Random Forest classifier and estimators $[50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning, regularization term $\alpha =10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidean metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $\alpha =10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
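The sketch below reproduces one such configuration with scikit-learn: hashed character 3-gram features of size 768 feeding the MLP with hidden layers $[300, 100, 50]$. Using HashingVectorizer as a stand-in for the semantic hashing features and the toy training sentences are our assumptions; this is not the authors' exact pipeline.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Character 3-gram hashing features of size 768, followed by the most often
# best-performing classifier reported above (MLP with hidden layers [300, 100, 50]).
pipeline = make_pipeline(
    HashingVectorizer(analyzer="char_wb", ngram_range=(3, 3), n_features=768),
    MLPClassifier(hidden_layer_sizes=(300, 100, 50), max_iter=500),
)
X_train = ["next train to munich please", "when does the bus leave"]   # toy examples
y_train = ["FindConnection", "DepartureTime"]
pipeline.fit(X_train, y_train)
print(pipeline.predict(["when is the next train"]))
```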
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
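For reference, the BERT fine-tuning hyperparameters listed above, collected into a single illustrative configuration (the dictionary layout and key names are ours):

```python
# Fine-tuning configuration for the conventional BERT baseline, as stated in the text.
bert_finetune_config = {
    "model": "bert-base-uncased",          # L = 12, H = 768, A = 12
    "epochs": 3,
    "optimizer": "Adam",
    "learning_rate": 2e-5,
    "max_seq_length": 128,
    "warmup_proportion": 0.1,
    "train_batch_size": {"twitter_sentiment": 4, "chatbot_intent": 8},
}
```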
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with the Adam Optimizer, a learning rate of $10^{-3}$, weight decay of $10^{-5}$, an MSE loss criterion and the same batch size as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
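A sketch of this reconstruction-stage training loop, reusing the DenoisingMLP sketch given earlier in the Proposed model section; the random tensors stand in for the $(h_{inc}, h_{comp})$ embedding pairs and are for illustration only.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins for the (h_inc, h_comp) embedding pairs described in Section 2.
h_inc, h_comp = torch.randn(32, 768, 128), torch.randn(32, 768, 128)
loader = DataLoader(TensorDataset(h_inc, h_comp), batch_size=4, shuffle=True)

model = DenoisingMLP()                    # sketch defined earlier in the Proposed model section
optimizer = Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()

for epoch in range(100):                  # 100 (or 1,000) epochs, as stated above
    for x, y in loader:                   # x: incomplete embeddings, y: complete ones
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```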
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves an accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second-highest performing model. Since the first and last corpora gave similar performance with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, true class 1 reaches only approximately 50% with BERT. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in the performance of class 0. A similar situation happens in the remaining two datasets, with accuracy in class 0 improving from 64% to 84% and from 60% to 76%, respectively.
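For readers reproducing these plots, a row-normalized confusion matrix of this kind can be computed with scikit-learn as sketched below; the label arrays are dummies.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
# Row-normalized confusion matrix: each row (true class) sums to 1.
cm = confusion_matrix(y_true, y_pred, normalize="true")
print(cm)
```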
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades with the increase in noise; thus, the F1-scores for gtts-witai are higher than those for macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown by the increasing robustness curve in Fig. FIGREF41 and can be seen in macsay-witai, where our model outperforms the baseline models by twice the gap achieved in gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and that between gtts-witai and macsay-witai being 1.89, whereas for Stacked DeBERT those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | No |
abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5 | abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5_1 | Q: Do they test their approach on a dataset without incomplete data?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is zero when the entire document is considered. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in this area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similar to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, and thus varying rates of missing and incorrect words. The level of noise in the STT-corrupted sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the following three services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backends).
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations are available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-gram, feature vector size of 768 as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimator $([50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidian metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with the Adam Optimizer, a learning rate of $10^{-3}$, weight decay of $10^{-5}$, an MSE loss criterion and the same batch size as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the target/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although to a cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean higher quantity of noise. As expected, the models' accuracy degrade with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model does not only outperform them but does so with a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and that between gtts-witai and macsay-witai being 1.89, whereas for Stacked DeBERT those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | No |
4debd7926941f1a02266b1a7be2df8ba6e79311a | 4debd7926941f1a02266b1a7be2df8ba6e79311a_0 | Q: Should their approach be applied only when dealing with incomplete data?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is zero when the entire document is considered. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in this area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similar to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, and thus varying rates of missing and incorrect words. The level of noise in the STT-corrupted sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We report iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the following three services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backends).
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that does not suffer from out-of-vocabulary issues. The authors achieve this by hashing subword tokens of each word instead of relying on the word itself, making the representation vocabulary-independent. For classification, classifiers such as a Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications is given in Section SECREF31.
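A minimal sketch of this subword hashing idea is given below; the trigram extraction and the hash-based bucketing into a 768-dimensional vector are assumptions made for illustration and do not reproduce the authors' exact implementation.

```python
# Minimal sketch of trigram-based semantic hashing into a fixed-size
# bag-of-subwords vector. The vector size of 768 mirrors the training
# specification used for comparison with BERT; the hash function and
# the #word# padding markers are illustrative assumptions.
import hashlib
import numpy as np

def trigrams(word: str):
    padded = f"#{word}#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def hash_features(sentence: str, dim: int = 768) -> np.ndarray:
    vec = np.zeros(dim, dtype=np.float32)
    for word in sentence.lower().split():
        for tri in trigrams(word):
            idx = int(hashlib.md5(tri.encode()).hexdigest(), 16) % dim
            vec[idx] += 1.0        # count hashed subword occurrences
    return vec

# Out-of-vocabulary or misspelled words still map to trigram buckets:
# hash_features("next trian to berlin")
```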
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on the complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). For the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and correct texts combined. The reported F1 scores are the best scores obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations are available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-grams, with a feature vector size of 768 to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with a Random Forest classifier and estimator counts of $[50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning, regularization term $\alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidean metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $\alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
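For reference, a subset of these classifiers can be configured roughly as follows; scikit-learn is assumed here, since the paper does not state which library was used.

```python
# Hedged sketch of a subset of the classifiers above, assuming scikit-learn;
# the exact library and remaining settings used by the authors are not stated.
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import NearestCentroid

classifiers = {
    "mlp": MLPClassifier(hidden_layer_sizes=(300, 100, 50)),
    "random_forest": RandomForestClassifier(n_estimators=50),
    "rf_grid": GridSearchCV(RandomForestClassifier(),
                            {"n_estimators": [50, 60, 70]}, cv=5),
    "linear_svc": LinearSVC(penalty="l2", tol=1e-3),
    "sgd": SGDClassifier(alpha=1e-4, penalty="elasticnet"),
    "nearest_centroid": NearestCentroid(metric="euclidean"),
    "bernoulli_nb": BernoulliNB(alpha=1e-2),
    "logreg": LogisticRegression(penalty="l2", tol=1e-4, C=1.0),
}
# Each model is fit on the 768-dimensional hashed features, e.g.:
# classifiers["mlp"].fit(X_train, y_train)
```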
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks ($L$), a hidden size ($H$) of 768, and 12 self-attention heads ($A$). The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with the Adam optimizer, a learning rate of $2\times10^{-5}$, a maximum sequence length of 128, and a warm-up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
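A hedged sketch of this fine-tuning setup is shown below, assuming the HuggingFace Transformers and PyTorch stack (the original implementation may differ); the warm-up schedule is omitted for brevity.

```python
# Sketch of the BERT-base-uncased fine-tuning configuration, assuming the
# HuggingFace Transformers + PyTorch stack (an assumption, not stated above).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)            # L=12, H=768, A=12

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def encode(sentences):
    # maximum sequence length of 128, as in the training specification
    return tokenizer(sentences, padding="max_length", truncation=True,
                     max_length=128, return_tensors="pt")

# One training step (batch size 4 or 8 depending on the corpus, 3 epochs):
# batch = encode(["i am so happy today"]); labels = torch.tensor([1])
# loss = model(**batch, labels=labels).loss
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```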
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with the Adam optimizer, a learning rate of $10^{-3}$, a weight decay of $10^{-5}$, an MSE loss criterion, and the same batch size as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
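The reconstruction training described above can be sketched as follows, assuming PyTorch; the layer sizes follow the shapes reported for $z_1$, $z_2$ and $\mathbf{z}$ in the model description, while the ReLU activations and the handling of the embedding axes are assumptions not specified by the authors.

```python
# Hedged sketch of the denoising reconstruction step in Stacked DeBERT
# (PyTorch assumed): two stacks of perceptrons compress the incomplete hidden
# embedding h_inc along its 768-dimensional axis to a 12-dimensional code and
# reconstruct it, supervised by the complete embedding h_comp via MSE loss.
import torch
import torch.nn as nn

class DenoisingMLP(nn.Module):
    def __init__(self):
        super().__init__()
        # f(.): 768 -> 128 -> 32 -> 12, producing z1, z2 and z
        self.encode = nn.Sequential(nn.Linear(768, 128), nn.ReLU(),
                                    nn.Linear(128, 32), nn.ReLU(),
                                    nn.Linear(32, 12))
        # g(.): 12 -> 32 -> 128 -> 768, producing h_rec
        self.decode = nn.Sequential(nn.Linear(12, 32), nn.ReLU(),
                                    nn.Linear(32, 128), nn.ReLU(),
                                    nn.Linear(128, 768))

    def forward(self, h_inc):                 # (N_bs, 768, 128)
        x = h_inc.transpose(1, 2)             # move the 768 axis last
        h_rec = self.decode(self.encode(x))
        return h_rec.transpose(1, 2)          # back to (N_bs, 768, 128)

model = DenoisingMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()
# loss = criterion(model(h_inc), h_comp); loss.backward(); optimizer.step()
```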
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus, displayed in Table TABREF37, show that our model has better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and the baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves an accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first version: 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last versions gave similar performance with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true (target) labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version, and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, BERT achieves approximately 50% on the true class 1. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in the performance of class 0. A similar situation happens in the remaining two datasets, with the accuracy of class 0 improving from 64% to 84% and from 60% to 76%, respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both TTS-STT combinations: with gtts-witai it outperforms the second-placing baseline model by 0.94% with an F1-score of 97.17%, and with macsay-witai it outperforms the next highest achieving model by 1.89% with an F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades with the increase in noise; thus, F1-scores for gtts-witai are higher than for macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so with a wider margin. This is shown by the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88, and between gtts-witai and macsay-witai being 1.89. In Stacked DeBERT, those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with a neural classifier. Our model showed better performance when evaluated on F1 scores in both the Twitter sentiment classification task and the intent classification task on text with STT error. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in the high-achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without this trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Moreover, experiments on the Twitter dataset also showed improved accuracy on clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future work, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%).
4debd7926941f1a02266b1a7be2df8ba6e79311a | 4debd7926941f1a02266b1a7be2df8ba6e79311a_1 | Q: Should their approach be applied only when dealing with incomplete data?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and sufixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embedddings, segmentation embeddings and position embeddings. The first one, token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which outputs a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.
Table TABREF24 exemplifies a complete and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT imbued sentences is denoted by a inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We decide to showcase that instead of regular BLEU because it is more indicative to the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise.
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) .
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-gram, feature vector size of 768 as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimator $([50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidian metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons are trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the target/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although to a cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean higher quantity of noise. As expected, the models' accuracy degrade with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model does not only outperform them but does so with a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.
Further analysis of the results in Table TABREF40 show that, BERT decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and gtts-witai and macsay-witai being 1.89. Whereas in Stacked DeBERT, that difference is 1.89 and 0.94 respectively. This is stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | No |
3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5 | 3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5_0 | Q: By how much do they outperform other models in the sentiment in intent classification tasks?
Text: Introduction
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
Proposed model
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and sufixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embedddings, segmentation embeddings and position embeddings. The first one, token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which outputs a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
Classification is done with a feedforward network and softmax activation function. Softmax $\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
Dataset ::: Twitter Sentiment Classification
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
Dataset ::: Intent Classification from Text with STT Error
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.
Table TABREF24 exemplifies a complete and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT imbued sentences is denoted by a inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We decide to showcase that instead of regular BLEU because it is more indicative to the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise.
Experiments ::: Baseline models
Besides the already mentioned BERT, the following baseline models are also used for comparison.
Experiments ::: Baseline models ::: NLU service platforms
We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) .
Experiments ::: Baseline models ::: Semantic hashing with classifier
Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31.
Experiments ::: Training specifications
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs.
Experiments ::: Training specifications ::: NLU service platforms
No settable training configurations available in the online platforms.
Experiments ::: Training specifications ::: Semantic hashing with classifier
Trained on 3-gram, feature vector size of 768 as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimator $([50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidian metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP.
Experiments ::: Training specifications ::: BERT
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
Experiments ::: Training specifications ::: Stacked DeBERT
Our proposed model is trained in end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons are trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
Experiments ::: Results on Sentiment Classification from Incorrect Text
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true (target) labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, true class 1 reaches only approximately 50% with BERT. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in the performance of class 0. A similar situation happens in the remaining two datasets, with the accuracy of class 0 improving from 64% to 84% and from 60% to 76%, respectively.
Experiments ::: Results on Intent Classification from Text with STT Error
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades as the noise increases, and thus the F1-scores on gtts-witai are higher than those on macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown by the increasing robustness curve in Fig. FIGREF41 and is reflected in our model outperforming the baseline models on macsay-witai by twice the gap achieved on gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with a difference of 1.88 between the complete data and gtts-witai and of 1.89 between gtts-witai and macsay-witai, whereas for Stacked DeBERT those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
Conclusion
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve classification performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with a neural classifier. Our model showed better performance when evaluated on F1 scores in both the Twitter sentiment classification task and the intent classification task on text with STT error. The per-class F1 scores were also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in the high-achieving classes for significant improvements in the lower-performing ones. In the Chatbot dataset, the accuracy improvement was achieved even without a trade-off, with the highest-achieving classes maintaining their accuracy while the lower-achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Moreover, experiments on the Twitter dataset also showed improved accuracy on clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future work, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
Acknowledgments
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%). | In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average |
44c7c1fbac80eaea736622913d65fe6453d72828 | 44c7c1fbac80eaea736622913d65fe6453d72828_0 | Q: What is the sample size of people used to measure user satisfaction?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6, which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we mark up the synthesized responses and return them to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, please see the technical report BIBREF1 for detailed implementation details.
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors in the absence of contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using the Double Metaphone Search Algorithm BIBREF8 for the noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
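The correction step can be sketched as phonetic code matching against the domain knowledge base; this is not Gunrock's actual code, and the `encode` argument is assumed to return (primary, secondary) Double Metaphone codes, for example the `doublemetaphone` function of the PyPI Metaphone package.

```python
def phonetic_correct(noun_phrase, knowledge_base, encode):
    """Suggest a knowledge-base entry whose Double Metaphone code matches."""
    primary, secondary = encode(noun_phrase)
    for entry in knowledge_base:                  # e.g., a list of popular movie names
        entry_primary, entry_secondary = encode(entry)
        if primary == entry_primary or (secondary and secondary == entry_secondary):
            return entry                          # suggested fix for the ASR hypothesis
    return noun_phrase                            # no confident correction found
```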
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively.
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
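A toy illustration of such a module is a transition table over a fixed set of dialog states; the states, intents and transitions below are invented for illustration and are not Gunrock's actual flow.

```python
ANIMAL_FLOW = {
    "ask_has_pet":      {"affirm": "ask_pet_name", "deny": "give_animal_fact"},
    "ask_pet_name":     {"inform": "acknowledge_pet"},
    "give_animal_fact": {"any": "ask_favorite_animal"},
}

def next_state(state, user_intent):
    """Rule-based transition between a fixed set of dialog states."""
    transitions = ANIMAL_FLOW.get(state, {})
    return transitions.get(user_intent, transitions.get("any", state))

# Example: next_state("ask_has_pet", "affirm") -> "ask_pet_name"
```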
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology.
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from a set of handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?").
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
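The selection-without-repetition behaviour and slot substitution described above can be sketched as follows; the class, template strings and state names are illustrative only, not Gunrock's implementation.

```python
import random

class TemplateManager:
    def __init__(self, templates_by_state):
        self.templates = templates_by_state   # dialog state -> list of templates
        self.used = {}                        # dialog state -> indices already used

    def respond(self, state, **slots):
        candidates = self.templates[state]
        used = self.used.setdefault(state, set())
        if len(used) == len(candidates):      # all templates exhausted: reset
            used.clear()
        idx = random.choice([i for i in range(len(candidates)) if i not in used])
        used.add(idx)
        return candidates[idx].format(**slots)

tm = TemplateManager({"ground_movie": [
    "Are you talking about {movie_title} released in {release_year} "
    "starring {actor_name} as {actor_role}?"]})
print(tm.respond("ground_movie", movie_title="A Star Is Born", release_year=2018,
                 actor_name="Bradley Cooper", actor_role="Jackson Maine"))
```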
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets).
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
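These regressions can be reproduced in a few lines, for example with statsmodels; the toy dataframe below is a stand-in for the real conversation-level data (one row per conversation), and the column names are our own.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the conversation-level data described above.
df = pd.DataFrame({"rating":          [3, 4, 5, 2, 4, 5],
                   "num_turns":       [10, 18, 30, 8, 22, 35],
                   "mean_word_count": [4.2, 6.5, 9.1, 3.8, 7.0, 8.3]})

rating_model = smf.ols("rating ~ mean_word_count", data=df).fit()
turns_model = smf.ols("num_turns ~ mean_word_count", data=df).fit()
print(rating_model.params)    # intercept and slope (beta) for the rating model
print(rating_model.pvalues)
```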
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with the predictor “Has Pet" (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet results in an improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle long and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrock's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improves user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | 34,432 user conversations |
44c7c1fbac80eaea736622913d65fe6453d72828 | 44c7c1fbac80eaea736622913d65fe6453d72828_1 | Q: What is the sample size of people used to measure user satisfaction?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors in the absence of contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using the Double Metaphone Search Algorithm BIBREF8 for the noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively.
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology.
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from a handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?")
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets).
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with the predictor “Has Pet" (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet results in an improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrocks's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improve user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | 34,432 |
3e0c9469821cb01a75e1818f2acb668d071fcf40 | 3e0c9469821cb01a75e1818f2acb668d071fcf40_0 | Q: What are all the metrics to measure user engagement?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors in the absence of contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using the Double Metaphone Search Algorithm BIBREF8 for the noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively.
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology.
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from a handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?")
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets).
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with the predictor “Has Pet" (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet results in an improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrocks's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improve user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | overall rating, mean number of turns |
3e0c9469821cb01a75e1818f2acb668d071fcf40 | 3e0c9469821cb01a75e1818f2acb668d071fcf40_1 | Q: What are all the metrics to measure user engagement?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1, addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4, including inconsistency and difficulty in complex sentence understanding (e.g., long utterances), and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
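As a rough illustration of this event-driven flow, the sketch below wires the components into a single response loop; all class and method names here are hypothetical stand-ins, not the actual CoBot or Gunrock API.

```python
# Illustrative sketch of the ASR -> NLU -> DM -> NLG -> TTS response loop.
# All component classes and method names are hypothetical stand-ins,
# not the actual CoBot/Gunrock API.

class SocialbotPipeline:
    def __init__(self, asr_corrector, nlu, dialog_manager, nlg, tts):
        self.asr_corrector = asr_corrector
        self.nlu = nlu
        self.dm = dialog_manager
        self.nlg = nlg
        self.tts = tts

    def respond(self, asr_result: dict, session_state: dict) -> str:
        # 1. Context-aware ASR correction
        text = self.asr_corrector.correct(asr_result["text"], session_state)
        # 2. Multi-step NLU: segmentation, coreference, topic/entity/sentiment, dialog acts
        nlu_output = self.nlu.analyze(text, session_state)
        # 3. Dialog management: choose a topic module and the next dialog state
        response_frame = self.dm.select_response(nlu_output, session_state)
        # 4. Template-based natural language generation
        response_text = self.nlg.realize(response_frame)
        # 5. Speech-synthesis markup before TTS
        return self.tts.markup(response_text)
```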
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors when contextual information is missing, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g., a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using The Double Metaphone Search Algorithm BIBREF8 for noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
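The correction step can be sketched as follows; `double_metaphone` is passed in and stands for any implementation of the Double Metaphone algorithm (returning a primary/secondary code pair), and the knowledge base is a toy list rather than the actual movie database.

```python
# Sketch of knowledge-based ASR correction by phonetic code matching.
# `double_metaphone` is a caller-supplied Double Metaphone implementation;
# `knowledge_base` is a toy list of popular movie names.

def correct_noun_phrase(phrase, knowledge_base, double_metaphone):
    if phrase in knowledge_base:
        return phrase  # already a known entity, nothing to fix
    primary, secondary = double_metaphone(phrase)
    for candidate in knowledge_base:
        cand_primary, cand_secondary = double_metaphone(candidate)
        if primary == cand_primary or (secondary and secondary == cand_secondary):
            return candidate  # phonetically equivalent entry: suggest it as the fix
    return phrase  # no confident correction found
```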
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
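A minimal sketch of the mask–segment–restore order described above is shown below; the break-token heuristic and the entity list are toy stand-ins for the trained segmenter and the NER output.

```python
# Simplified sketch of the preprocessing order: mask named entities, split
# into segments, then restore the entities. The split heuristic is a toy
# stand-in for the trained break-token model.
import re

def segment_utterance(utterance, named_entities):
    masked, slots = utterance, {}
    for i, entity in enumerate(named_entities):      # 1. mask entities
        token = f"__ENT{i}__"
        slots[token] = entity
        masked = masked.replace(entity, token)
    parts = re.split(r"\s+(?:and then|but|because)\s+|[.?!]\s*", masked)  # 2. segment
    parts = [p.strip() for p in parts if p.strip()]
    return [re.sub(r"__ENT\d+__", lambda m: slots[m.group(0)], p) for p in parts]  # 3. unmask

print(segment_utterance(
    "i like the movie a star is born and then i saw bradley cooper",
    ["a star is born", "bradley cooper"]))
```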
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags, and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU analyses of each segment in a user utterance are sent to the DM and the NLG module for state tracking and generation, respectively.
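Multi-label tagging of this kind can be sketched as follows; the tag subset, threshold, and scoring function are illustrative assumptions rather than the actual MIDAS classifier.

```python
# Sketch of multi-label dialog act tagging in the spirit of MIDAS: a model
# scores every tag for a segment, and all tags above a threshold are kept.
# The tag subset, threshold, and `score_fn` are illustrative assumptions.

MIDAS_TAGS = ["statement", "opinion", "yes_no_question", "open_question",
              "command", "appreciation", "back_channeling"]  # subset of the 23 tags

def tag_dialog_acts(segment, context, score_fn, threshold=0.5):
    scores = score_fn(segment, context)  # expected: dict mapping tag -> probability
    return [tag for tag in MIDAS_TAGS if scores.get(tag, 0.0) >= threshold]
```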
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
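A toy version of such a rule-based dialog flow is sketched below; the states, intents, and transitions are invented for illustration only.

```python
# Toy finite-state dialog flow for a single topic module. States, intents,
# and transitions are invented for illustration.

ANIMAL_FLOW = {
    "ask_has_pet":      {"yes": "ask_pet_name", "no": "give_animal_fact",
                         "default": "give_animal_fact"},
    "ask_pet_name":     {"default": "ask_pet_duration"},
    "ask_pet_duration": {"default": "give_animal_fact"},
    "give_animal_fact": {"default": "ask_favorite_animal"},
}

def next_state(current, user_intent, flow=ANIMAL_FLOW):
    transitions = flow.get(current, {})
    return transitions.get(user_intent, transitions.get("default", current))

print(next_state("ask_has_pet", "yes"))  # -> "ask_pet_name"
```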
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and interested in science and technology.
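A backstory lookup of this kind can be sketched as a simple pattern-to-response table; the entries below are invented examples, not the actual Gunrock persona database.

```python
# Sketch of a persona backstory lookup: question patterns mapped to consistent
# first-person answers. The entries are invented examples.
from typing import Optional

PERSONA_BACKSTORY = {
    "favorite color": "I really like teal. It reminds me of the ocean.",
    "favorite movie": "I loved Hidden Figures. Science and space in one movie!",
    "do you have a pet": "I don't have a pet, but I enjoy hearing about yours.",
}

def backstory_response(user_utterance: str) -> Optional[str]:
    utterance = user_utterance.lower()
    for pattern, answer in PERSONA_BACKSTORY.items():
        if pattern in utterance:
            return answer
    return None  # fall through to the regular topic modules
```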
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?")
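The select-without-repetition and slot-filling behavior can be sketched as follows; the template table and state name are illustrative assumptions.

```python
# Sketch of a template manager: each dialog state maps to several surface
# templates; one is sampled without repetition, then slots are filled in.
import random

TEMPLATES = {
    "ground_movie": [
        "Are you talking about {movie_title} released in {release_year} "
        "starring {actor_name} as {actor_role}?",
        "Do you mean the {release_year} movie {movie_title} with {actor_name}?",
    ],
}

def realize(state, slots, used):
    candidates = [t for t in TEMPLATES[state] if (state, t) not in used]
    if not candidates:  # every variant has been used: reset and allow reuse
        used.difference_update({(state, t) for t in TEMPLATES[state]})
        candidates = list(TEMPLATES[state])
    template = random.choice(candidates)
    used.add((state, template))
    return template.format(**slots)

used_templates = set()
print(realize("ground_movie",
              {"movie_title": "A Star Is Born", "release_year": "2018",
               "actor_name": "Bradley Cooper", "actor_role": "Jackson Maine"},
              used_templates))
```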
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
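A simplified version of such rule-based markup is sketched below; the sentiment-to-interjection table is an assumption, while the `say-as` interjection tag follows standard Alexa speechcon SSML.

```python
# Sketch of rule-based expressive TTS markup: prepend an interjection,
# rendered as an Alexa speechcon, when the response matches a sentiment.
# The sentiment-to-interjection table is an illustrative assumption.

INTERJECTIONS = {"positive": "wowza", "negative": "oh boy", "neutral": "hmm"}

def add_speechcon(response: str, sentiment: str) -> str:
    word = INTERJECTIONS.get(sentiment)
    if word is None:
        return f"<speak>{response}</speak>"
    return (f'<speak><say-as interpret-as="interjection">{word}</say-as>! '
            f'{response}</speak>')

print(add_speechcon("I also loved that movie.", "positive"))
```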
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets).
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
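An analysis of this form can be reproduced with ordinary least squares; the sketch below uses toy data and the statsmodels formula API rather than the actual conversation logs.

```python
# Sketch of the engagement analysis: separate ordinary-least-squares models of
# overall rating and number of turns on mean word count. Toy data stands in
# for the real conversation logs.
import pandas as pd
import statsmodels.formula.api as smf

conversations = pd.DataFrame({
    "rating":          [3, 5, 4, 2, 5, 4],
    "num_turns":       [10, 42, 25, 8, 51, 30],
    "mean_word_count": [3.1, 9.8, 6.5, 2.7, 11.2, 7.0],
})

rating_model = smf.ols("rating ~ mean_word_count", data=conversations).fit()
turns_model = smf.ols("num_turns ~ mean_word_count", data=conversations).fit()
print(rating_model.params)
print(turns_model.params)
```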
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
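The corresponding log-log model can be sketched the same way, again with toy data standing in for the real logs.

```python
# Sketch of the backstory analysis: regress log(rating) on log(number of
# backstory questions asked), restricted to users with at least one such
# question. Toy data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

users = pd.DataFrame({
    "rating":              [3, 4, 5, 5, 2, 4],
    "backstory_questions": [1, 2, 6, 9, 1, 3],
})

log_model = smf.ols("np.log(rating) ~ np.log(backstory_questions)", data=users).fit()
print(log_model.params)
```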
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with “Has Pet" as the predictor (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet results in an improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and more complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrock's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improves user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | overall rating, mean number of turns |
a725246bac4625e6fe99ea236a96ccb21b5f30c6 | a725246bac4625e6fe99ea236a96ccb21b5f30c6_0 | Q: What system designs are introduced?
Amazon Conversational Bot Toolkit, natural language understanding (NLU) (nlu) module, dialog manager, knowledge bases, natural language generation (NLG) (nlg) module, text to speech (TTS) (tts)
516626825e51ca1e8a3e0ac896c538c9d8a747c8 | 516626825e51ca1e8a3e0ac896c538c9d8a747c8_0 | Q: Do they specify the model they use for Gunrock?
No
77af93200138f46bb178c02f710944a01ed86481 | 77af93200138f46bb178c02f710944a01ed86481_0 | Q: Do they gather explicit user satisfaction data on Gunrock?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1, addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4, including inconsistency and difficulty in complex sentence understanding (e.g., long utterances), and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors when contextual information is missing, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g., a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary codes using The Double Metaphone Search Algorithm BIBREF8 for noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags, and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU analyses of each segment in a user utterance are sent to the DM and the NLG module for state tracking and generation, respectively.
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
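A minimal sketch of such a rule-based finite-state dialog flow is shown below; the states, intents, and prompts are invented for illustration and do not correspond to Gunrock's actual dialog states.

```python
# Toy finite-state dialog flow for one topic module.
DIALOG_FLOW = {
    "intro": {"prompt": "Do you have a pet?",
              "transitions": {"yes": "ask_pet_name", "no": "animal_fact"}},
    "ask_pet_name": {"prompt": "What is your pet's name?",
                     "transitions": {"any": "animal_fact"}},
    "animal_fact": {"prompt": "Here is a fun animal fact ...",
                    "transitions": {"any": "intro"}},
}

def next_state(current: str, user_intent: str) -> str:
    """Rule-based transition: fall back to a catch-all edge when the
    user's intent has no dedicated transition."""
    edges = DIALOG_FLOW[current]["transitions"]
    return edges.get(user_intent, edges.get("any", current))

state = "intro"
for intent in ["yes", "any"]:
    state = next_state(state, intent)
    print(state, "->", DIALOG_FLOW[state]["prompt"])
```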
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database , as well as other up-to-date scraped information appropriate for the submodule, such as news and current showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology.
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?")
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
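The template selection and slot substitution described above can be sketched as follows; the dialog state, templates, and slot values are illustrative and are not Gunrock's actual ones.

```python
import random

TEMPLATES = {
    "movie_grounding": [
        "Are you talking about {movie_title} released in {release_year}?",
        "Do you mean the movie {movie_title} from {release_year}?",
    ],
}
_used = {}  # dialog state -> templates already shown

def generate(state: str, **slots) -> str:
    """Pick an unused template for this state (reset when exhausted) and fill slots."""
    pool = [t for t in TEMPLATES[state] if t not in _used.get(state, set())]
    if not pool:
        _used[state] = set()
        pool = list(TEMPLATES[state])
    template = random.choice(pool)
    _used.setdefault(state, set()).add(template)
    return template.format(**slots)

print(generate("movie_grounding", movie_title="A Star Is Born", release_year=2018))
```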
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth, backstory queries, and interleaving of personal and factual responses.
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (see the NLU section for details). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
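A regression of this form can be reproduced with standard tools; the sketch below uses randomly generated stand-in data rather than the Gunrock conversation logs, so its coefficients mean nothing beyond illustrating the procedure.

```python
import numpy as np
from scipy.stats import linregress

# Synthetic stand-ins for per-conversation mean word count and overall rating.
rng = np.random.default_rng(0)
mean_word_count = rng.uniform(2, 15, size=500)
rating = np.clip(3 + 0.05 * mean_word_count + rng.normal(0, 0.8, 500), 1, 5)

# Simple linear regression of rating on mean word count.
fit = linregress(mean_word_count, rating)
print(f"beta={fit.slope:.3f}, SE={fit.stderr:.3f}, p={fit.pvalue:.3g}")
```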
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with predictor of “Has Pet' (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet result in improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrock's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improves user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | Yes |
71538776757a32eee930d297f6667cd0ec2e9231 | 71538776757a32eee930d297f6667cd0ec2e9231_0 | Q: How do they correlate user backstory queries to user satisfaction?
Text: Introduction
Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).
System Architecture
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6, which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context and creates a natural language understanding (NLU) module where multiple components analyze the user utterances. A dialog manager (DM) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases. Then a natural language generation (NLG) module generates a corresponding response. Finally, we mark up the synthesized responses and return them to the users through text to speech (TTS). While we provide an overview of the system in the following sections, please see the technical report BIBREF1 for detailed implementation information.
System Architecture ::: Automatic Speech Recognition
Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors without contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g., a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary code using the Double Metaphone search algorithm BIBREF8 for noun phrases (extracted as noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2.
System Architecture ::: Natural Language Understanding
Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him" in the last segment in User_5 is replaced with “bradley cooper" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.
In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively.
System Architecture ::: Dialog Manager
We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter," triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.
Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.
In the meantime, we consider feedback signals such as “continue" and “stop" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module.
System Architecture ::: Knowledge Databases
All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database , as well as other up-to-date scraped information appropriate for the submodule, such as news and current showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology.
System Architecture ::: Natural Language Generation
In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?")
In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question.
System Architecture ::: Text To Speech
After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12.
Analysis
From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth, backstory queries, and interleaving of personal and factual responses.
Analysis ::: Response Depth: Mean Word Count
Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (see the NLU section for details). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.
We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.
Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts.
Analysis ::: Gunrock's Backstory and Persona
We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction.
Analysis ::: Interleaving Personal and Factual Information: Animal Module
Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!", “How long have you had Oliver?"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.
We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes", “No", or “NA" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with predictor of “Has Pet' (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet result in improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15.
Conclusion
Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrock's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improves user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return.
Acknowledgments
We would like to acknowledge the help from Amazon in terms of financial and technical support. | modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions |
7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed | 7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed_0 | Q: Is ROUGE their only baseline?
Text: Introduction
Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.
Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.
However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.
We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE to ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further.
On Acceptability
Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .
Scientists—linguists, cognitive scientists, psychologists, and NLP researchers alike—disagree about how to represent human linguistic abilities. One subject of debate is acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal to gain insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).
In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation.
Method
In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two.
SLOR
SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length:
$$\text{SLOR}(S) = \frac{1}{|S|} \left( \ln (p_M(S)) - \ln (p_u(S)) \right)$$ (Eq. 8)
where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as
$$p_u(S) = \prod _{t \in S}p(t)$$ (Eq. 9)
with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.
The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences:
$$\textrm{(i)} ~~ \textrm{He is a citizen of France.} \\ \textrm{(ii)} ~~ \textrm{He is a citizen of Tuvalu.}$$ (Eq. 11)
Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.
We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.
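The score can be computed directly from per-token log-probabilities; the sketch below assumes these have already been obtained from a trained LM and a unigram model, and the numbers are made up for illustration.

```python
# Minimal sketch of SLOR given per-token LM log-probabilities and
# unigram log-probabilities for the same sentence.

def slor(lm_logprobs, unigram_logprobs):
    """SLOR = (log p_M(S) - log p_u(S)) / |S|, with log-probabilities per token."""
    assert len(lm_logprobs) == len(unigram_logprobs)
    n = len(lm_logprobs)
    return (sum(lm_logprobs) - sum(unigram_logprobs)) / n

# "He is a citizen of Tuvalu ." -- the rare word gets a low unigram log-prob,
# so subtracting it keeps the rare-but-fluent sentence from being penalized.
lm_lp      = [-2.1, -1.3, -0.9, -4.0, -0.7, -9.5, -1.0]
unigram_lp = [-5.0, -3.2, -2.5, -7.8, -3.0, -13.0, -2.8]
print(round(slor(lm_lp, unigram_lp), 3))
```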
WordPieces
Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.
WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079.
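At inference time, WordPiece segmentation is a greedy longest-match over a learned vocabulary; the sketch below illustrates this with a tiny hand-picked vocabulary, whereas the actual vocabulary would be estimated from the training corpus as described above.

```python
# Toy WordPiece segmentation: greedy longest-match with "##" continuation pieces.
VOCAB = {"citizen", "tu", "##val", "##u", "he", "is", "a", "of", "[UNK]"}

def wordpiece_split(word: str) -> list:
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:                      # try the longest piece first
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in VOCAB:
                pieces.append(piece)
                break
            end -= 1
        if end == start:                        # no piece matched
            return ["[UNK]"]
        start = end
    return pieces

print(wordpiece_split("tuvalu"))   # ['tu', '##val', '##u']
print(wordpiece_split("citizen"))  # ['citizen']
```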
WPSLOR
We propose a novel version of SLOR, by incorporating a LM which is trained on a corpus which has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with less parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR.
Experiment
Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics.
Dataset
We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.
Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ .
LM Hyperparameters and Training
We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.
The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 .
Baseline Metrics
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.
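For reference, a simplified ROUGE-L can be sketched from the longest common subsequence; this is an illustrative re-implementation, not the official scorer, and it omits details such as multi-reference handling.

```python
def lcs_length(a, b):
    """Dynamic-programming length of the longest common subsequence."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i-1][j-1] + 1 if x == y else max(table[i-1][j], table[i][j-1])
    return table[-1][-1]

def rouge_l_f(candidate: str, reference: str, beta: float = 1.0) -> float:
    c, r = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return (1 + beta**2) * precision * recall / (recall + beta**2 * precision)

print(round(rouge_l_f("the guards arrived", "the guards arrived at the prison"), 3))
```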
We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as
$$\text{NCE}(S) = \tfrac{1}{|S|} \ln (p_M(S))$$ (Eq. 22)
with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
$$\text{PPL}(S) = \exp (-\text{NCE}(S))$$ (Eq. 24)
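Both baselines can be computed from the same sentence log-probability used for SLOR; the sketch below reuses the made-up per-token log-probabilities from the SLOR example above.

```python
import math

def nce(lm_logprobs):
    """Length-normalized sentence log-probability (negative cross-entropy)."""
    return sum(lm_logprobs) / len(lm_logprobs)

def ppl(lm_logprobs):
    """Perplexity as the exponentiated cross-entropy."""
    return math.exp(-nce(lm_logprobs))

lm_lp = [-2.1, -1.3, -0.9, -4.0, -0.7, -9.5, -1.0]
print(round(nce(lm_lp), 3), round(ppl(lm_lp), 1))
```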
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments.
Correlation and Evaluation Scores
Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations:
$$\rho _{X,Y} = \frac{\text{cov}(X,Y)}{\sigma _X \sigma _Y}$$ (Eq. 28)
Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the squares of residuals after a linear transformation, divided by the sample size:
$$\text{MSE}_{X,Y} = \underset{f}{\min }\frac{1}{|X|}\sum \limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30)
with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating.
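Both evaluation scores can be sketched in a few lines; the metric scores and ratings below are random stand-ins, since the point is only the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
metric_scores = rng.normal(size=200)
human_ratings = 2 + 0.5 * metric_scores + rng.normal(0, 0.3, 200)

# Pearson correlation between metric scores and human ratings.
pearson = np.corrcoef(metric_scores, human_ratings)[0, 1]

# MSE after the best linear mapping from metric scores to human ratings.
slope, intercept = np.polyfit(metric_scores, human_ratings, deg=1)
mse = np.mean((slope * metric_scores + intercept - human_ratings) ** 2)
print(f"Pearson={pearson:.3f}, MSE={mse:.3f}")
```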
Results and Discussion
As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult.
Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).
Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.
The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.
Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.
Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references.
Analysis I: Fluency Evaluation per Compression System
The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\text{Var}(Y) = 0.35$ vs. $\text{Var}(Y) = 0.17$ for SEQ2SEQ).
Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to make use of a fixed repertoire of words.
For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are with a maximum difference of $0.072$ for Pearson and ILP much smaller than for SEQ2SEQ. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this.
Analysis II: Fluency Evaluation per Domain
Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection "Analysis I: Fluency Evaluation per Compression System" . This is due to the distributions of scores being uniform ( $\text{Var}(Y) \in [0.28, 0.36]$ ).
Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?
Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work.
Incorporation of Given References
ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics.
Experimental Setup
First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.
Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples.
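The two combinations can be sketched as follows; all score arrays are random stand-ins for the real WPSLOR, ROUGE-L-mult, and fluency values, and the train/test split sizes are illustrative rather than the actual splits described above.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
wpslor  = rng.normal(size=500)
rouge_l = rng.uniform(0, 1, size=500)
fluency = rng.uniform(1, 3, size=500)          # human ratings (only needed for SVR)

# (1) Supervised combination: default-parameter SVR on both scores.
features = np.column_stack([wpslor, rouge_l])
svr = SVR().fit(features[:400], fluency[:400])
predicted = svr.predict(features[400:])

# (2) Annotation-free combination (ROUGE-LM): z-normalize both scores and add them.
def znorm(x):
    return (x - x.mean()) / x.std()

rouge_lm = znorm(wpslor) + znorm(rouge_l)
print(predicted[:3], rouge_lm[:3])
```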
Fluency Evaluation
Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.
Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap.
Compression Evaluation
Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references.
Criticism of Common Metrics for NLG
In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.
The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .
For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgements for BLEU, ROUGE and METEOR. They contributed this in a large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation.
Conclusion
We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation.
Acknowledgments
We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback. | No |
7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed | 7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed_1 | Q: Is ROUGE their only baseline?
Text: Introduction
Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.
Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.
However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.
We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE into ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further.
On Acceptability
Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .
Scientists—linguists, cognitive scientists, psychologists, and NLP researchers alike—disagree about how to represent human linguistic abilities. One subject of debate is acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal of gaining insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).
In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation.
Method
In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two.
SLOR
SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length:
$$\text{SLOR}(S) = \frac{1}{|S|} \left(\ln (p_M(S)) - \ln (p_u(S))\right)$$ (Eq. 8)
where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as
$$p_u(S) = \prod _{t \in S}p(t)$$ (Eq. 9)
with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.
The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences:
$$\begin{aligned} \textrm {(i)} ~~ &\textrm {He is a citizen of France.} \\ \textrm {(ii)} ~~ &\textrm {He is a citizen of Tuvalu.} \end{aligned}$$ (Eq. 11)
Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.
We calculate the probability of a sentence with a long short-term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.
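As a minimal illustration of the computation (not the authors' code), SLOR can be obtained from per-token natural-log probabilities under the LM and under the unigram model; the function below assumes both lists are aligned with the tokens of the sentence.

```python
def slor(lm_token_logprobs, unigram_token_logprobs):
    """SLOR (Eq. 8): length-normalized LM log-probability minus
    length-normalized unigram log-probability of the same sentence."""
    assert len(lm_token_logprobs) == len(unigram_token_logprobs)
    n = len(lm_token_logprobs)
    lm_logprob = sum(lm_token_logprobs)            # ln p_M(S)
    unigram_logprob = sum(unigram_token_logprobs)  # ln p_u(S)
    return (lm_logprob - unigram_logprob) / n
```

In the France/Tuvalu example above, the low unigram log-probability of the rare word is subtracted out, so the two sentences receive comparable scores.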
WordPieces
Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.
WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079.
WPSLOR
We propose a novel version of SLOR by incorporating a LM which is trained on a corpus that has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with fewer parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR.
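WPSLOR applies exactly the same formula, only to WordPiece tokens. The snippet below is an illustration only: the authors train their own WordPiece model, while this sketch borrows BERT's publicly available WordPiece vocabulary via the `transformers` library as a stand-in for segmenting a sentence before scoring.

```python
from transformers import BertTokenizer  # stand-in WordPiece vocabulary, not the authors' model

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
pieces = tokenizer.tokenize("He is a citizen of Tuvalu.")
# Rare words are split into more frequent segments; the exact split depends on the vocabulary.
# WPSLOR then scores the piece sequence with the slor() helper above, using a
# WordPiece-level LM and unigram estimates in place of the word-level ones.
```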
Experiment
Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics.
Dataset
We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.
Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ .
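For two raters, the quadratic weighted $\kappa$ can be computed with scikit-learn as sketched below; how the 3 to 5 ratings per example are aggregated follows the cited procedure and is not shown here, and the ratings in the snippet are made up.

```python
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 1, 3, 2, 2]  # hypothetical fluency ratings on the 1-3 scale
rater_b = [3, 1, 1, 2, 2, 3]
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.3f}")
```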
LM Hyperparameters and Training
We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.
The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 .
Baseline Metrics
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.
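For reference, the sentence-level computation behind ROUGE-L is a longest-common-subsequence overlap; the sketch below is an illustration with a balanced F-score ($\beta = 1$), not the evaluation script used in the experiments.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f(candidate, reference, beta=1.0):
    """ROUGE-L F-score over tokenized, lowercased sentences."""
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```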
We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.
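A generic version of such an n-gram recall and F-score over the union of reference n-gram sets is sketched below; the exact definitions of LR2/LR3 in toutanova2016dataset may differ in detail, so this is only meant to convey the idea.

```python
def ngram_set(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_recall_f(candidate, references, n=2):
    """Recall and F-score of candidate n-grams against the union of reference n-gram sets."""
    cand = ngram_set(candidate, n)
    ref = set().union(*(ngram_set(r, n) for r in references))
    if not cand or not ref:
        return 0.0, 0.0
    overlap = len(cand & ref)
    recall = overlap / len(ref)
    precision = overlap / len(cand)
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f_score
```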
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as
$$\text{NCE}(S) = \tfrac{1}{|S|} \ln (p_M(S))$$ (Eq. 22)
with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
$$\text{PPL}(S) = \exp (-\text{NCE}(S))$$ (Eq. 24)
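Both LM baselines follow directly from the same per-token log-probabilities used for SLOR; a minimal sketch:

```python
import math

def nce(lm_token_logprobs):
    """Negative cross-entropy (Eq. 22): length-normalized sentence log-probability."""
    return sum(lm_token_logprobs) / len(lm_token_logprobs)

def ppl(lm_token_logprobs):
    """Perplexity (Eq. 24): exponentiated cross-entropy."""
    return math.exp(-nce(lm_token_logprobs))
```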
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments.
Correlation and Evaluation Scores
Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations:
$$\rho _{X,Y} = \frac{\text{cov}(X,Y)}{\sigma _X \sigma _Y}$$ (Eq. 28)
Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the sum of squared residuals after a linear transformation, divided by the sample size:
$$\text{MSE}_{X,Y} = \underset{f}{\min }\frac{1}{|X|}\sum \limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30)
with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating.
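Both evaluation scores can be computed as sketched below; the minimum over linear functions in the MSE definition amounts to a least-squares fit. This is an illustrative sketch, not the authors' evaluation code.

```python
import numpy as np
from scipy.stats import pearsonr

def evaluation_scores(metric_scores, human_ratings):
    """Pearson correlation and MSE after the best linear mapping from metric to ratings."""
    x = np.asarray(metric_scores, dtype=float)
    y = np.asarray(human_ratings, dtype=float)
    pearson = pearsonr(x, y)[0]
    slope, intercept = np.polyfit(x, y, deg=1)  # least-squares linear fit f
    mse = float(np.mean((slope * x + intercept - y) ** 2))
    return pearson, mse
```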
Results and Discussion
As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult.
Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).
Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.
The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.
Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.
Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references.
Analysis I: Fluency Evaluation per Compression System
The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\text{Var}(Y) = 0.35$ vs. $\text{Var}(Y) = 0.17$ for SEQ2SEQ).
Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to making use of a fixed repertoire of words.
For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are much smaller than for SEQ2SEQ, with a maximum difference of $0.072$ for Pearson and ILP. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this.
Analysis II: Fluency Evaluation per Domain
Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection "Analysis I: Fluency Evaluation per Compression System" . This is due to the distributions of scores being uniform ( $\text{Var}(Y) \in [0.28, 0.36]$ ).
Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?
Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work.
Incorporation of Given References
ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics.
Experimental Setup
First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.
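Concretely, the trainable combination amounts to fitting scikit-learn's SVR on two features per system output. The snippet below uses randomly generated placeholder data in place of the annotated training split, purely to show the shape of the computation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Placeholder features and ratings standing in for the 500 annotated training examples.
wpslor_train, rouge_train = rng.normal(size=500), rng.uniform(size=500)
fluency_train = rng.uniform(1, 3, size=500)

model = SVR()  # default hyperparameters, as described above
model.fit(np.column_stack([wpslor_train, rouge_train]), fluency_train)

wpslor_test, rouge_test = rng.normal(size=5), rng.uniform(size=5)
predicted_fluency = model.predict(np.column_stack([wpslor_test, rouge_test]))
```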
Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples.
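The uninformed combination is simply a sum of the two scores after normalizing each by its mean and standard deviation (read here as z-scoring); a minimal sketch:

```python
import numpy as np

def rouge_lm(rouge_scores, wpslor_scores):
    """ROUGE-LM: z-normalize each score over the evaluation set, then add."""
    def z(a):
        a = np.asarray(a, dtype=float)
        return (a - a.mean()) / a.std()
    return z(rouge_scores) + z(wpslor_scores)
```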
Fluency Evaluation
Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.
Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap.
Compression Evaluation
Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references.
Criticism of Common Metrics for NLG
In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.
The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .
For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgments for BLEU, ROUGE and METEOR. They attributed this in large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation.
Conclusion
We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation.
Acknowledgments
We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback. | No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU. |
3ac30bd7476d759ea5d9a5abf696d4dfc480175b | 3ac30bd7476d759ea5d9a5abf696d4dfc480175b_0 | Q: what language models do they use?
Text: Introduction
Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.
Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.
However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.
We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE into ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further.
On Acceptability
Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .
Scientists—linguists, cognitive scientists, psychologists, and NLP researchers alike—disagree about how to represent human linguistic abilities. One subject of debate is acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal of gaining insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).
In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation.
Method
In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two.
SLOR
SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length:
$$\text{SLOR}(S) = \frac{1}{|S|} \left(\ln (p_M(S)) - \ln (p_u(S))\right)$$ (Eq. 8)
where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as
$$p_u(S) = \prod _{t \in S}p(t)$$ (Eq. 9)
with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.
The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences:
$$\begin{aligned} \textrm {(i)} ~~ &\textrm {He is a citizen of France.} \\ \textrm {(ii)} ~~ &\textrm {He is a citizen of Tuvalu.} \end{aligned}$$ (Eq. 11)
Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.
We calculate the probability of a sentence with a long short-term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.
WordPieces
Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.
WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079.
WPSLOR
We propose a novel version of SLOR by incorporating a LM which is trained on a corpus that has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with fewer parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR.
Experiment
Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics.
Dataset
We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.
Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ .
LM Hyperparameters and Training
We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.
The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 .
Baseline Metrics
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.
We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as
$$\text{NCE}(S) = \tfrac{1}{|S|} \ln (p_M(S))$$ (Eq. 22)
with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
$$\text{PPL}(S) = \exp (-\text{NCE}(S))$$ (Eq. 24)
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments.
Correlation and Evaluation Scores
Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations:
$$\rho _{X,Y} = \frac{\text{cov}(X,Y)}{\sigma _X \sigma _Y}$$ (Eq. 28)
Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the sum of squared residuals after a linear transformation, divided by the sample size:
$$\text{MSE}_{X,Y} = \underset{f}{\min }\frac{1}{|X|}\sum \limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30)
with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating.
Results and Discussion
As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult.
Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).
Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.
The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.
Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.
Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references.
Analysis I: Fluency Evaluation per Compression System
The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\text{Var}(Y) = 0.35$ vs. $\text{Var}(Y) = 0.17$ for SEQ2SEQ).
Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to making use of a fixed repertoire of words.
For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are much smaller than for SEQ2SEQ, with a maximum difference of $0.072$ for Pearson and ILP. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this.
Analysis II: Fluency Evaluation per Domain
Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection "Analysis I: Fluency Evaluation per Compression System" . This is due to the distributions of scores being uniform ( $\text{Var}(Y) \in [0.28, 0.36]$ ).
Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?
Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work.
Incorporation of Given References
ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics.
Experimental Setup
First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.
Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples.
Fluency Evaluation
Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.
Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap.
Compression Evaluation
Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references.
Criticism of Common Metrics for NLG
In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.
The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .
For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgments for BLEU, ROUGE and METEOR. They attributed this in large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation.
Conclusion
We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation.
Acknowledgments
We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback. | LSTM LMs |
0e57a0983b4731eba9470ba964d131045c8c7ea7 | 0e57a0983b4731eba9470ba964d131045c8c7ea7_0 | Q: what questions do they ask human judges?
Text: Introduction
Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.
Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.
However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.
We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE into ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further.
On Acceptability
Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .
Scientists—linguists, cognitive scientists, psychologists, and NLP researchers alike—disagree about how to represent human linguistic abilities. One subject of debate is acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal of gaining insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).
In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation.
Method
In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two.
SLOR
SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length:
$$\text{SLOR}(S) = \frac{1}{|S|} \left(\ln (p_M(S)) - \ln (p_u(S))\right)$$ (Eq. 8)
where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as
$$p_u(S) = \prod _{t \in S}p(t)$$ (Eq. 9)
with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.
The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences:
$$\begin{aligned} \textrm {(i)} ~~ &\textrm {He is a citizen of France.} \\ \textrm {(ii)} ~~ &\textrm {He is a citizen of Tuvalu.} \end{aligned}$$ (Eq. 11)
Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.
We calculate the probability of a sentence with a long short-term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.
WordPieces
Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.
WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079.
WPSLOR
We propose a novel version of SLOR by incorporating a LM which is trained on a corpus that has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with fewer parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR.
Experiment
Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics.
Dataset
We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.
Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ .
LM Hyperparameters and Training
We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.
The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 .
Baseline Metrics
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.
We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as
$$\text{NCE}(S) = \tfrac{1}{|S|} \ln (p_M(S))$$ (Eq. 22)
with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
$$\text{PPL}(S) = \exp (-\text{NCE}(S))$$ (Eq. 24)
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments.
Correlation and Evaluation Scores
Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations:
$$\rho _{X,Y} = \frac{\text{cov}(X,Y)}{\sigma _X \sigma _Y}$$ (Eq. 28)
Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the squares of residuals after a linear transformation, divided by the sample size:
$$\text{MSE}_{X,Y} = \underset{f}{\min }\frac{1}{|X|}\sum \limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30)
with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating.
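As a sketch of how both evaluation scores are computed (assuming numpy arrays of metric scores $X$ and human ratings $Y$; the linear function $f$ for MSE is obtained here by a least-squares fit):

```python
import numpy as np

def pearson(x, y):
    # covariance divided by the product of the standard deviations (Eq. 28)
    return np.cov(x, y, bias=True)[0, 1] / (x.std() * y.std())

def mse(x, y):
    # fit the linear function f(x) = a * x + b by least squares,
    # then average the squared residuals (Eq. 30)
    a, b = np.polyfit(x, y, deg=1)
    return np.mean((a * x + b - y) ** 2)
```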
Results and Discussion
As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult.
Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).
Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.
The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.
Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.
Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references.
Analysis I: Fluency Evaluation per Compression System
The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\text{Var}(Y) = 0.35$ vs. $\text{Var}(Y) = 0.17$ for SEQ2SEQ).
Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to make use of a fixed repertoire of words.
For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are much smaller than for SEQ2SEQ, with a maximum difference of $0.072$ in Pearson correlation for ILP. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this.
Analysis II: Fluency Evaluation per Domain
Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection "Analysis I: Fluency Evaluation per Compression System" . This is due to the distributions of scores being uniform ( $\text{Var}(Y) \in [0.28, 0.36]$ ).
Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?
Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work.
Incorporation of Given References
ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics.
Experimental Setup
First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training an LM. Note that fluency annotations are typically not available in real-world scenarios; the reason we use them here is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.
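A minimal sketch of this setup is given below; the feature values are hypothetical toy numbers, and we only rely on scikit-learn's SVR with its default parameters, as stated above.

```python
import numpy as np
from sklearn.svm import SVR

# Each row holds the two features of one compression: [WPSLOR, ROUGE-L-mult].
X_train = np.array([[-2.1, 0.61], [-3.4, 0.42], [-2.6, 0.80]])  # toy values
y_train = np.array([2.7, 1.8, 2.3])                             # human fluency ratings
X_test = np.array([[-2.8, 0.55]])

model = SVR()                 # default parameters
model.fit(X_train, y_train)
predicted_fluency = model.predict(X_test)
```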
Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples.
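The uninformed combination can be sketched as follows; reading "normalize by mean and variance" as z-scoring over the set of evaluated outputs is how we implement it here.

```python
import numpy as np

def rouge_lm(wpslor_scores, rouge_l_scores):
    """ROUGE-LM: z-normalize both scores over the set of outputs, then add."""
    def z(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.mean()) / s.std()
    return z(wpslor_scores) + z(rouge_l_scores)
```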
Fluency Evaluation
Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.
Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap.
Compression Evaluation
Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references.
Criticism of Common Metrics for NLG
In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.
The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .
For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgments for BLEU, ROUGE and METEOR. They attributed this in large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation.
Conclusion
We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation.
Acknowledgments
We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback. | Unanswerable |
f0317e48dafe117829e88e54ed2edab24b86edb1 | f0317e48dafe117829e88e54ed2edab24b86edb1_0 | Q: What misbehavior is identified?
Text: Introduction
In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best for representing images.
Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications.
In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order to capture the most relevant information in a modality to output the best translation of a sentence.
To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been addressed to the MT community. The best results with an NMT model were those of BIBREF5 huang2016attention, who used an LSTM fed with global visual features or multiple regional visual features, followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring.
Our paper is structured as follows. In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 .
Neural Machine Translation
In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 ).
Text-based NMT
Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence and a decoder generates one target word at a time and by decomposing the following conditional probability : DISPLAYFORM0
Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 .
The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses its previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 : DISPLAYFORM0
In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol. We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0
The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0
where INLINEFORM0 is a feedforward network with one hidden layer.
Conditional GRU
The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0
Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0
Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0
Multimodal NMT
Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred to as the "MNMT" model in the author's paper) which can be seen as an expansion of the attention-based NMT model proposed in the previous section. Given a sequence of annotations INLINEFORM0 from a second modality, we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0
This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0
The probabilities for the next target word (from equation EQREF5 ) also takes into account the new context vector INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 is a new trainable parameter.
In the field of multimodal NMT, the second modality is usually an image computed into feature maps with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image) . We follow the same protocol for our experiments and describe it in section SECREF5 .
Attention-based Models
We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contains relevant information in the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.
Soft attention
Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current (intermediate) one as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes a state-of-the-art.
The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0
The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over annotations: DISPLAYFORM0
Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters.
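The following numpy sketch illustrates one decoding step of the soft mechanism; the parameter shapes are assumptions made for illustration only, since the exact formulation is given by the equations above.

```python
import numpy as np

def soft_attention(annotations, h_tilde, W, U, v):
    """Soft attention over L image annotations at one decoding step.

    annotations: (L, D) spatial feature vectors.
    h_tilde:     (H,) intermediate hidden state proposal from REC1.
    W, U, v:     trained parameters with assumed shapes (A, H), (A, D), (A,).
    """
    # expected alignment score for every annotation
    e = np.tanh(W @ h_tilde + annotations @ U.T) @ v   # shape (L,)
    # attention mask: softmax over the annotations
    alphas = np.exp(e - e.max())
    alphas /= alphas.sum()
    # context vector: weighted sum of the annotation vectors
    context = alphas @ annotations                     # shape (D,)
    return context, alphas
```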
Hard Stochastic attention
This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we are making a hard choice to attend only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images.
The mechanism INLINEFORM0 is now a function that returns a sampled intermediate latent variable INLINEFORM1 based upon a multinoulli distribution parameterized by INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0
Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0
The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0
In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0
To reduce variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0
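A simplified sketch of the sampling step is shown below; the multinoulli parameters are computed as in the soft model, and returning the expectation when not sampling is an assumption of this sketch rather than part of the description above.

```python
import numpy as np

def hard_attention_context(annotations, alphas, rng, sample=True):
    """Hard stochastic attention: attend to a single image annotation.

    annotations: (L, D) image annotations; alphas: (L,) multinoulli parameters.
    """
    if not sample:
        return alphas @ annotations         # deterministic expectation
    l = rng.choice(len(alphas), p=alphas)   # draw the one-hot indicator
    return annotations[l]                   # context vector = selected annotation

# Example: rng = np.random.default_rng(0); c_t = hard_attention_context(A, a, rng)
```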
Local Attention
In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local Attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention uses instead the same "zoom" for all target positions and still achieved good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around it. This approach has the advantage of being differentiable whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train as shown in section SECREF22 . The soft attention has the drawback of attending to the whole image, which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity).
More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . Context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a squared region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 . The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred as Predictive alignment (local-m) in the author's paper) according to the following equation: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0
where standard deviation INLINEFORM0 . We obtain context vector INLINEFORM1 by following equation .
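A numpy sketch of the predictive alignment and the Gaussian re-weighting follows; the window masking and the parameter shapes are simplifying assumptions of the sketch.

```python
import numpy as np

def local_attention(annotations, scores, h_tilde, W_p, v_p, D):
    """Local attention over L annotations with predictive alignment.

    annotations: (L, dim); scores: (L,) alignment scores as in the soft model;
    W_p, v_p: trained parameters; D: empirically chosen window half-size.
    """
    L = len(annotations)
    # aligned position p_t = L * sigmoid(v_p . tanh(W_p h_tilde))
    p_t = L / (1.0 + np.exp(-(v_p @ np.tanh(W_p @ h_tilde))))
    pos = np.arange(L)
    # keep only the annotations inside the window [p_t - D, p_t + D]
    masked = np.where(np.abs(pos - p_t) <= D, scores, -np.inf)
    alphas = np.exp(masked - masked[np.isfinite(masked)].max())
    alphas /= alphas.sum()
    # favor annotations near p_t with a Gaussian of standard deviation D / 2
    alphas *= np.exp(-((pos - p_t) ** 2) / (2 * (D / 2.0) ** 2))
    return alphas @ annotations
```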
Image attention optimization
Three optimizations can be added to the attention mechanism regarding the image modality. All lead to a better use of the image by the model and improve the translation scores overall.
At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0
It is then used to compute the time-dependent image context vector : DISPLAYFORM0
BIBREF2 icml2015xuc15 empirically found it to put more emphasis on the objects in the image descriptions generated with their model.
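As a small sketch, the gating can be written as follows; the parameter names denote a single feed-forward layer with a sigmoid output and are assumptions for illustration.

```python
import numpy as np

def gated_image_context(prev_state, image_context, w_beta, b_beta):
    """Scale the image context vector by a gating scalar in (0, 1).

    prev_state: previous decoder hidden state s_{t-1}.
    w_beta, b_beta: trained parameters of the gating layer.
    """
    beta = 1.0 / (1.0 + np.exp(-(w_beta @ prev_state + b_beta)))  # scalar gate
    return beta * image_context, beta
```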
We also double the output size of trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when it comes to computing the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the three matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed a better coverage of the objects in the image by the alpha weights.
Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merges each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 with a non-linearity : DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 function. The new annotations go through an L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0
This method can be seen as the removal of unnecessary information in the image annotations according to the source sentence. This attention is used on top of the others - before decoding - and is referred to as "grounded image" in Table TABREF41 .
Experiments
For these experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 , which is an extended version of Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of size 1000 is used for metrics evaluation.
Training and model details
All our models are built on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for source and target language are of 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero.
To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings, on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vector and on all components of the deep output layer before the readout operation. We apply dropout using one same mask in all time steps BIBREF19 .
We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the train set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary size to 9226 and 14957 words for English and German respectively.
All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4 : training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations.
Quantitative results
We notice a nice overall improvement over the BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER on both precision-oriented metrics, the model shows a strong similarity of the n-grams of our candidate translations with respect to the references. The scores for the more recall-oriented metric METEOR are roughly the same across our models, which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process according to the previous choices, this may reinforce the idea of recall. We also remark interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more from the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities.
Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b, our experimental results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical with the major exception of the gating scalar (cf. section SECREF4 ). This motivated some of our qualitative analysis and hesitation towards the current architecture in the next section.
Qualitative results
For space-saving and ergonomic reasons, we only discuss the hard stochastic and soft attention, the latter being a generalization of the local attention.
As we can see in Figure FIGREF44 , the soft attention model is looking roughly at the same region of the image for every decoding step INLINEFORM0 . Because the words "hund" (dog), "wald" (forest) or "weg" (way) in the left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding, as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or more accurately whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does), which means that the attention mechanisms don't really understand the more complex relationships between objects, i.e., what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant number of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process.
It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the right translation for relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate with an example:
The monomodal translation has a sentence-level BLEU of 82.16 whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanisms. Nevertheless, one has to concede that the use of images indubitably helps the translation, as shown in the score table.
Conclusion and future work
We have tried different attention mechanisms and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude pretty safely that images are a helpful resource for the machine in a translation task. We are looking forward to trying out richer and more suitable features for multimodal translation (i.e. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness.
Acknowledgements
This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientifique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Walloon Region, contract no. 1510501. | if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations |
f0317e48dafe117829e88e54ed2edab24b86edb1 | f0317e48dafe117829e88e54ed2edab24b86edb1_1 | Q: What misbehavior is identified?
Text: Introduction
In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best for representing images.
Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications.
In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order to capture the most relevant information in a modality to output the best translation of a sentence.
To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been addressed to the MT community. The best results with an NMT model were those of BIBREF5 huang2016attention, who used an LSTM fed with global visual features or multiple regional visual features, followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring.
Our paper is structured as follows. In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 .
Neural Machine Translation
In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 ).
Text-based NMT
Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence and a decoder generates one target word at a time and by decomposing the following conditional probability : DISPLAYFORM0
Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 .
The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses its previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 : DISPLAYFORM0
In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol. We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0
The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0
where INLINEFORM0 is a feedforward network with one hidden layer.
Conditional GRU
The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0
Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0
Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0
Multimodal NMT
Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred to as the "MNMT" model in the author's paper) which can be seen as an expansion of the attention-based NMT model proposed in the previous section. Given a sequence of annotations INLINEFORM0 from a second modality, we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0
This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0
The probabilities for the next target word (from equation EQREF5 ) also takes into account the new context vector INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 is a new trainable parameter.
In the field of multimodal NMT, the second modality is usually an image computed into feature maps with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image) . We follow the same protocol for our experiments and describe it in section SECREF5 .
Attention-based Models
We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contains relevant information in the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.
Soft attention
Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current (intermediate) one as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes a state-of-the-art.
The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0
The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over annotations: DISPLAYFORM0
Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters.
Hard Stochastic attention
This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we are making a hard choice to attend only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images.
The mechanism INLINEFORM0 is now a function that returns a sampled intermediate latent variable INLINEFORM1 based upon a multinoulli distribution parameterized by INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0
Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0
The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0
In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0
To reduce variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0
Local Attention
In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local Attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention uses instead the same "zoom" for all target positions and still achieved good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around it. This approach has the advantage of being differentiable whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train as shown in section SECREF22 . The soft attention has the drawback of attending to the whole image, which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity).
More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . Context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a squared region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 . The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred as Predictive alignment (local-m) in the author's paper) according to the following equation: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0
where standard deviation INLINEFORM0 . We obtain context vector INLINEFORM1 by following equation .
Image attention optimization
Three optimizations can be added to the attention mechanism regarding the image modality. All lead to a better use of the image by the model and improve the translation scores overall.
At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0
It is then used to compute the time-dependent image context vector : DISPLAYFORM0
BIBREF2 icml2015xuc15 empirically found it to put more emphasis on the objects in the image descriptions generated with their model.
We also double the output size of trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when it comes to computing the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the three matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed a better coverage of the objects in the image by the alpha weights.
Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merges each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 with a non-linearity : DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 function. The new annotations go through an L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0
This method can be seen as the removal of unnecessary information in the image annotations according to the source sentence. This attention is used on top of the others - before decoding - and is referred to as "grounded image" in Table TABREF41 .
Experiments
For these experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 , which is an extended version of Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of size 1000 is used for metrics evaluation.
Training and model details
All our models are built on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for the source and target language are of 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero.
To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings, on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vector and on all components of the deep output layer before the readout operation. We apply dropout using one same mask in all time steps BIBREF19 .
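As an illustration of this feature-extraction step with current tooling (the original implementation predates it), the sketch below takes the output of the conv4_x block of a pre-trained ResNet-50 — exposed as layer3 in torchvision, which corresponds to the res4f activations — and flattens it into 196 annotations of size 1024; the random tensor stands in for a batch of preprocessed images.

```python
import torch
import torchvision.models as models

# Load a ResNet-50 pre-trained on ImageNet and keep it in inference mode.
resnet = models.resnet50(pretrained=True).eval()

features = {}
def hook(module, inputs, output):
    features["res4f"] = output  # (B, 1024, 14, 14) for 224 x 224 inputs

# layer3 is the conv4_x stage, whose last block is res4f.
resnet.layer3.register_forward_hook(hook)

with torch.no_grad():
    images = torch.randn(2, 3, 224, 224)       # stand-in for a preprocessed batch
    resnet(images)

fmap = features["res4f"]                        # (B, 1024, 14, 14)
annotations = fmap.flatten(2).transpose(1, 2)   # (B, 196, 1024), one annotation per location
```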
We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the train set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary size to 9226 and 14957 words for English and German respectively.
All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4 : training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations.
Quantitative results
We notice nice overall progress over the BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER on both precision-oriented metrics, the model shows a strong similarity of the n-grams of our candidate translations with respect to the references. The scores on the more recall-oriented metric METEOR are roughly the same across our models, which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of the previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process according to the previous choices, this may reinforce the idea of recall. We also note interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more from the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities.
Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b, our experimental results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical, with the major exception of the gating scalar (cf. section SECREF4 ). This motivated some of our qualitative analysis and our hesitation towards the current architecture in the next section.
Qualitative results
For reasons of space and readability, we only discuss the hard stochastic and soft attention, the latter being a generalization of the local attention.
As we can see in Figure FIGREF44 , the soft attention model looks at roughly the same region of the image for every decoding step INLINEFORM0 . Because the words "hund" (dog), "wald" (forest) or "weg" (way) in the left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding, as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or, more accurately, whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does), which means that the attention mechanisms don't really understand the more complex relationships between objects, i.e. what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant number of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process.
It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented, and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word embeddings took care of the right translation of relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate this with an example:
The monomodal translation has a sentence-level BLEU of 82.16, whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanisms. Nevertheless, one has to concede that the use of images indubitably helps the translation, as shown in the score table.
Conclusion and future work
We have tried different attention mechanisms and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude fairly safely that images are a helpful resource for the machine in a translation task. We look forward to trying out richer and more suitable features for multimodal translation (i.e. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness.
Acknowledgements
This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Waloon Region, contract no. 1510501. | if the attention loose track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations |
ec91b87c3f45df050e4e16018d2bf5b62e4ca298 | ec91b87c3f45df050e4e16018d2bf5b62e4ca298_0 | Q: What is the baseline used?
Text: Introduction
In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best to represent images.
Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications.
In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order to capture the most relevant information in a modality to output the best translation of a sentence.
To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been proposed to the MT community. The best NMT results were those of BIBREF5 huang2016attention, who used an LSTM fed with global visual features or multiple regional visual features, followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring.
Our paper is structured as follows. In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 .
Neural Machine Translation
In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 ).
Text-based NMT
Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence, and the decoder generates one target word at a time by decomposing the following conditional probability: DISPLAYFORM0
Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 .
The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses its previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 : DISPLAYFORM0
In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol. We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0
The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0
where INLINEFORM0 is a feedforward network with one hidden layer.
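A hedged sketch of these two pieces is given below: the deep output layer that combines the previous target embedding, the decoder state and the context vector before a softmax, and the one-hidden-layer feedforward initialization of the decoder state. The parameter names and the use of the mean source annotation for initialization are assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def deep_output_probabilities(y_prev_emb, s_t, c_t, W_o, W_y, W_s, W_c):
    """Next-word distribution from the previous target embedding, the decoder state
    and the context vector (parameter names are assumed)."""
    logit_inputs = np.tanh(W_y @ y_prev_emb + W_s @ s_t + W_c @ c_t)
    return softmax(W_o @ logit_inputs)   # one probability per vocabulary entry

def init_decoder_state(annotations, W_init, b_init):
    """Initialize the decoder state with a one-hidden-layer feedforward network;
    using the mean source annotation as its input is an assumption here."""
    mean_annotation = annotations.mean(axis=0)
    return np.tanh(W_init @ mean_annotation + b_init)
```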
Conditional GRU
The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0
Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0
Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0
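The following PyTorch-style sketch summarizes one cGRU step (REC1, then the attention, then REC2); the module names and dimensions are placeholders, and the attention mechanism is passed in as a callable.

```python
import torch.nn as nn

class ConditionalGRU(nn.Module):
    """Minimal sketch of the cGRU: REC1 -> attention -> REC2."""
    def __init__(self, emb_dim, ctx_dim, hid_dim, attention):
        super().__init__()
        self.rec1 = nn.GRUCell(emb_dim, hid_dim)   # consumes the previous target embedding
        self.rec2 = nn.GRUCell(ctx_dim, hid_dim)   # consumes the attention context vector
        self.attention = attention                 # callable: (annotations, state) -> context

    def forward(self, y_prev_emb, s_prev, annotations):
        s_proposal = self.rec1(y_prev_emb, s_prev)        # intermediate hidden state proposal
        c_t = self.attention(annotations, s_proposal)     # time-dependent context vector
        s_t = self.rec2(c_t, s_proposal)                  # final hidden state
        return s_t, c_t
```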
Multimodal NMT
Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred to as the "MNMT" model in the authors' paper), which can be seen as an expansion of the attention-based NMT model proposed in the previous section. Given a sequence of annotations from a second modality INLINEFORM0 , we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0
This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0
The probabilities for the next target word (from equation EQREF5 ) also take into account the new context vector INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 is a new trainable parameter.
In the field of multimodal NMT, the second modality is usually an image computed into feature maps with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image) . We follow the same protocol for our experiments and describe it in section SECREF5 .
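A minimal sketch of this doubly-attentive step is given below. Concatenating the two context vectors before REC2 is a simplification of the formulation above, which gives each context vector its own transformation; all names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class DoublyAttentiveGRU(nn.Module):
    """Sketch of the multimodal cGRU: two attention mechanisms (text and image)
    feed two context vectors into the second recurrent cell."""
    def __init__(self, emb_dim, txt_ctx_dim, img_ctx_dim, hid_dim, txt_att, img_att):
        super().__init__()
        self.rec1 = nn.GRUCell(emb_dim, hid_dim)
        self.rec2 = nn.GRUCell(txt_ctx_dim + img_ctx_dim, hid_dim)  # both contexts, concatenated
        self.txt_att, self.img_att = txt_att, img_att

    def forward(self, y_prev_emb, s_prev, txt_annotations, img_annotations):
        s_proposal = self.rec1(y_prev_emb, s_prev)
        c_txt = self.txt_att(txt_annotations, s_proposal)   # source-sentence context
        c_img = self.img_att(img_annotations, s_proposal)   # image context
        s_t = self.rec2(torch.cat([c_txt, c_img], dim=-1), s_proposal)
        return s_t, c_txt, c_img
```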
Attention-based Models
We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contains relevant information from the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.
Soft attention
Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current (intermediate) one as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes a state-of-the-art.
The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0
The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over annotations: DISPLAYFORM0
Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters.
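A compact sketch of this soft attention step, with assumed parameter names W, U and v, is the following:

```python
import numpy as np

def soft_attention(s_proposal, annotations, W, U, v):
    """Score every annotation, softmax-normalize, and return the expected
    (weighted-sum) context vector together with the attention mask."""
    scores = np.tanh(annotations @ U.T + W @ s_proposal) @ v   # (S,) one score per annotation
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()                                     # attention mask over annotations
    return alphas @ annotations, alphas                        # context vector and weights
```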
Hard Stochastic attention
This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we are making a hard choice to attend only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images.
The mechanism INLINEFORM0 is now a function that returns sampled intermediate latent variables INLINEFORM1 based upon a multinoulli distribution parameterized by INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0
Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0
The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0
In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0
To reduce variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0
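The sketch below illustrates the sampling step and the moving-average baseline used to reduce the variance of the Monte Carlo estimate; the decay value and the code structure are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_attention_sample(alphas, annotations):
    """Sample one annotation index from the multinoulli defined by the alphas."""
    idx = rng.choice(len(alphas), p=alphas)
    return annotations[idx], idx

class BaselineEstimator:
    """Moving-average baseline on the log-likelihood, accumulated with exponential
    decay over mini-batches (the decay value is an assumption)."""
    def __init__(self, decay=0.9):
        self.decay, self.value = decay, 0.0

    def update(self, log_likelihood):
        self.value = self.decay * self.value + (1.0 - self.decay) * log_likelihood
        return self.value

def reinforce_weight(log_likelihood, baseline):
    """Scalar that multiplies the score-function term in the sampled-gradient update."""
    return log_likelihood - baseline.update(log_likelihood)
```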
Local Attention
In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention instead uses the same "zoom" for all target positions and still achieved good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around it. This approach has the advantage of being differentiable, whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train, as shown in section SECREF22 . The soft attention has the drawback of attending to the whole image, which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity).
More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . The context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 , where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a square region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 . The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred to as Predictive alignment (local-m) in the authors' paper) according to the following equation: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0
where standard deviation INLINEFORM0 . We obtain context vector INLINEFORM1 by following equation .
Image attention optimization
Three optimizations can be added to the attention mechanism regarding the image modality. All of them lead to better use of the image by the model and improve the translation scores overall.
At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0
It is then used to compute the time-dependent image context vector: DISPLAYFORM0
BIBREF2 icml2015xuc15 empirically found that it puts more emphasis on the objects in the image descriptions generated with their model.
We also double the output size of the trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when computing the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the three matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed better coverage of the objects in the image by the alpha weights.
Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merges each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 through a non-linearity: DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 function. The new annotations go through an L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0
This method can be seen as the removal of unnecessary information from the image annotations according to the source sentence. This attention is used on top of the others - before decoding - and is referred to as "grounded image" in Table TABREF41 .
Experiments
For these experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 , which is an extended version of the Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of 1,000 triples is used for metric evaluation.
Training and model details
All our models are built on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for the source and target language are of 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero.
To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings, on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vector and on all components of the deep output layer before the readout operation. We apply dropout using one same mask in all time steps BIBREF19 .
We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the train set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary size to 9226 and 14957 words for English and German respectively.
All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4 : training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations.
Quantitative results
We notice nice overall progress over the BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER on both precision-oriented metrics, the model shows a strong similarity of the n-grams of our candidate translations with respect to the references. The scores on the more recall-oriented metric METEOR are roughly the same across our models, which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of the previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process according to the previous choices, this may reinforce the idea of recall. We also note interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more from the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities.
Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b, our experimental results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical, with the major exception of the gating scalar (cf. section SECREF4 ). This motivated some of our qualitative analysis and our hesitation towards the current architecture in the next section.
Qualitative results
For reasons of space and readability, we only discuss the hard stochastic and soft attention, the latter being a generalization of the local attention.
As we can see in Figure FIGREF44 , the soft attention model looks at roughly the same region of the image for every decoding step INLINEFORM0 . Because the words "hund" (dog), "wald" (forest) or "weg" (way) in the left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding, as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or, more accurately, whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does), which means that the attention mechanisms don't really understand the more complex relationships between objects, i.e. what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant number of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process.
It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented, and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word embeddings took care of the right translation of relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate this with an example:
The monomodal translation has a sentence-level BLEU of 82.16, whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanisms. Nevertheless, one has to concede that the use of images indubitably helps the translation, as shown in the score table.
Conclusion and future work
We have tried different attention mechanisms and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude fairly safely that images are a helpful resource for the machine in a translation task. We look forward to trying out richer and more suitable features for multimodal translation (i.e. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness.
Acknowledgements
This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Waloon Region, contract no. 1510501. | Unanswerable |
f129c97a81d81d32633c94111018880a7ffe16d1 | f129c97a81d81d32633c94111018880a7ffe16d1_0 | Q: Which attention mechanisms do they compare?
Text: Introduction
In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best to represent images.
Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications.
In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order to capture the most relevant information in a modality to output the best translation of a sentence.
To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been proposed to the MT community. The best NMT results were those of BIBREF5 huang2016attention, who used an LSTM fed with global visual features or multiple regional visual features, followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring.
Our paper is structured as follows. In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 .
Neural Machine Translation
In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 ).
Text-based NMT
Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence, and the decoder generates one target word at a time by decomposing the following conditional probability: DISPLAYFORM0
Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 .
The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses its previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 : DISPLAYFORM0
In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol. We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0
The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0
where INLINEFORM0 is a feedforward network with one hidden layer.
Conditional GRU
The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0
Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0
Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0
Multimodal NMT
Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred to as the "MNMT" model in the authors' paper), which can be seen as an expansion of the attention-based NMT model proposed in the previous section. Given a sequence of annotations from a second modality INLINEFORM0 , we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0
This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0
The probabilities for the next target word (from equation EQREF5 ) also take into account the new context vector INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 is a new trainable parameter.
In the field of multimodal NMT, the second modality is usually an image computed into feature maps with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image) . We follow the same protocol for our experiments and describe it in section SECREF5 .
Attention-based Models
We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contains relevant information from the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.
Soft attention
Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current (intermediate) one as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes a state-of-the-art.
The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0
The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over annotations: DISPLAYFORM0
Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters.
Hard Stochastic attention
This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we are making a hard choice to attend only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images.
The mechanism INLINEFORM0 is now a function that returns sampled intermediate latent variables INLINEFORM1 based upon a multinoulli distribution parameterized by INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0
Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0
The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0
In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0
To reduce variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0
Local Attention
In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention instead uses the same "zoom" for all target positions and still achieved good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around it. This approach has the advantage of being differentiable, whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train, as shown in section SECREF22 . The soft attention has the drawback of attending to the whole image, which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity).
More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . The context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 , where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a square region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 . The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred to as Predictive alignment (local-m) in the authors' paper) according to the following equation: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0
where standard deviation INLINEFORM0 . We obtain context vector INLINEFORM1 by following equation .
Image attention optimization
Three optimizations can be added to the attention mechanism regarding the image modality. All of them lead to better use of the image by the model and improve the translation scores overall.
At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0
It is then used to compute the time-dependent image context vector: DISPLAYFORM0
BIBREF2 icml2015xuc15 empirically found that it puts more emphasis on the objects in the image descriptions generated with their model.
We also double the output size of the trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when computing the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the three matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed better coverage of the objects in the image by the alpha weights.
Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merges each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 through a non-linearity: DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 function. The new annotations go through an L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0
This method can be seen as the removal of unnecessary information from the image annotations according to the source sentence. This attention is used on top of the others - before decoding - and is referred to as "grounded image" in Table TABREF41 .
Experiments
For these experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 , which is an extended version of the Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of 1,000 triples is used for metric evaluation.
Training and model details
All our models are built on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for the source and target language are of 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero.
To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings, on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vector and on all components of the deep output layer before the readout operation. We apply dropout using one same mask in all time steps BIBREF19 .
We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the train set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary size to 9226 and 14957 words for English and German respectively.
All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4 : training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations.
Quantitative results
We notice nice overall progress over the BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER on both precision-oriented metrics, the model shows a strong similarity of the n-grams of our candidate translations with respect to the references. The scores on the more recall-oriented metric METEOR are roughly the same across our models, which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of the previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process according to the previous choices, this may reinforce the idea of recall. We also note interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more from the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities.
Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b, our experimental results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical, with the major exception of the gating scalar (cf. section SECREF4 ). This motivated some of our qualitative analysis and our hesitation towards the current architecture in the next section.
Qualitative results
For reasons of space and readability, we only discuss the hard stochastic and soft attention, the latter being a generalization of the local attention.
As we can see in Figure FIGREF44 , the soft attention model looks at roughly the same region of the image for every decoding step INLINEFORM0 . Because the words "hund" (dog), "wald" (forest) or "weg" (way) in the left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding, as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or, more accurately, whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does), which means that the attention mechanisms don't really understand the more complex relationships between objects, i.e. what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant number of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process.
It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented, and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word embeddings took care of the right translation of relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate this with an example:
The monomodal translation has a sentence-level BLEU of 82.16, whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanisms. Nevertheless, one has to concede that the use of images indubitably helps the translation, as shown in the score table.
Conclusion and future work
We have tried different attention mechanisms and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude fairly safely that images are a helpful resource for the machine in a translation task. We look forward to trying out richer and more suitable features for multimodal translation (i.e. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness.
Acknowledgements
This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Waloon Region, contract no. 1510501. | Soft attention, Hard Stochastic attention, Local Attention |
100cf8b72d46da39fedfe77ec939fb44f25de77f | 100cf8b72d46da39fedfe77ec939fb44f25de77f_0 | Q: Which paired corpora did they use in the other experiment?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantages and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is only loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. This indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is potentially harmful to supervised models. Besides, most articles and comments on the Internet are unpaired. For example, a lot of articles do not have corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more varied and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe with the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex patterns of comment generation and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is contained in the input text. However, the comments are often only weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
The unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves its generalization. Many comments provide unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but matching these comments with the corresponding news articles requires redundant work. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples at the training stage, so it can alleviate the negative sampling bias.
Although there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans generate comments: humans do not need to go through the whole article, but are capable of making a comment after capturing its general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are varied, covering topics ranging from pets to sports.
The retrieval-based model should score the match between the incoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions that improve the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still find comments efficiently. However, the study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
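As an illustration of the scoring step described above, the following minimal sketch scores one article against a pool of precomputed comment representations with a single matrix–vector product and returns the top-ranked indices. It is not the authors' code: the array names, dimensions, and the exhaustive argsort are assumptions, and a production system would plug in an approximate MIPS index instead.

```python
import numpy as np

def retrieve_comments(article_vec, comment_matrix, top_k=5):
    """Score every candidate comment against one article with a dot product.

    article_vec:    (d,)   topic representation of the article
    comment_matrix: (N, d) stacked topic representations of the comment pool
    Returns the indices of the top_k highest-scoring comments.
    """
    scores = comment_matrix @ article_vec      # (N,) dot-product scores
    return np.argsort(-scores)[:top_k]         # highest scores first

# toy usage: random vectors stand in for learned topic representations
rng = np.random.default_rng(0)
pool = rng.normal(size=(10_000, 128))          # scaled-down comment pool
query = rng.normal(size=128)
print(retrieve_comments(query, pool, top_k=3))
```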
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
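A minimal sketch of the encoder–decoder described in this section is given below, assuming a PyTorch implementation with the paper's reported sizes (vocabulary 30,000, hidden size 512, 100 topics). For brevity the KL term is written against a standard Gaussian prior; the paper instead uses the Laplace (logistic-normal) approximation to a Dirichlet prior, so the prior parameters would be substituted accordingly. Class and variable names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralVariationalTopicModel(nn.Module):
    def __init__(self, vocab_size=30000, hidden=512, num_topics=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, num_topics)        # posterior mean
        self.to_logvar = nn.Linear(hidden, num_topics)    # posterior log-variance
        self.decoder = nn.Linear(num_topics, vocab_size)  # topic-to-word matrix

    def forward(self, bow):                   # bow: (batch, vocab) word counts
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        theta = torch.softmax(z, dim=-1)                         # topic mixture
        log_probs = F.log_softmax(self.decoder(theta), dim=-1)
        recon = -(bow * log_probs).sum(-1)        # bag-of-words reconstruction
        # KL against a standard Gaussian prior (the paper uses a Laplace
        # approximation to a Dirichlet prior instead)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean(), theta         # negative ELBO and topics
```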
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
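The joint objective can be expressed in a few lines. The sketch below assumes hypothetical model methods (`topic_elbo`, `supervised_loss`) returning the unsupervised and supervised losses computed over a shared encoder; it shows the weighting, not the authors' training loop.

```python
def joint_loss(unsup_loss, sup_loss, lam=1.0):
    # L = L_unsupervised + lambda * L_supervised (lambda = 1.0 in the paper)
    return unsup_loss + lam * sup_loss

def train_step(model, paired_batch, unpaired_batch, lam=1.0):
    l_unsup, _ = model.topic_elbo(unpaired_batch)    # unpaired articles/comments
    l_sup = model.supervised_loss(*paired_batch)     # article-comment pairs
    return joint_loss(l_unsup, l_sup, lam)
```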
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
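A rough sketch of the preprocessing described above (Jieba tokenization plus the length filters) might look as follows; the field names of the article records are assumptions.

```python
import jieba

def preprocess(article):
    """Tokenize with Jieba and apply the dataset's length filters."""
    title_tokens = jieba.lcut(article["title"])
    body_tokens = jieba.lcut(article["content"])
    # drop short articles and articles with too few comments
    if len(body_tokens) < 30 or len(article["comments"]) < 20:
        return None
    return title_tokens, body_tokens
```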
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, keeping only the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
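The KL annealing mentioned above can be implemented as a simple schedule. The linear form below is an assumption, since the text only states that the weight is increased gradually after each epoch.

```python
def kl_weight(epoch, warmup_epochs=5):
    """Linearly anneal the KL weight from 0 to 1 over the first epochs."""
    return min(1.0, epoch / float(warmup_epochs))

# usage inside the training loop (recon and kl as in the topic model sketch):
# loss = recon + kl_weight(epoch) * kl
```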
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be varied, so it is intractable to find all the possible references to compare the model outputs against. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
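For concreteness, the three ranking metrics can be computed per article as sketched below; the exact aggregation over articles and ground-truth comments may differ from the paper's protocol.

```python
def ranking_metrics(ranked_ids, correct_ids, k=1):
    """Recall@k, Mean Rank, and MRR for one article's candidate ranking.

    ranked_ids:  candidate comment ids sorted by model score (best first)
    correct_ids: set of ground-truth (human) comment ids; all of them are
                 assumed to be present in the candidate list
    """
    ranks = sorted(ranked_ids.index(c) + 1 for c in correct_ids)
    recall_at_k = sum(r <= k for r in ranks) / len(correct_ids)
    mean_rank = sum(ranks) / len(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return recall_at_k, mean_rank, mrr

# toy example: 2 correct comments hidden in a 6-candidate list
print(ranking_metrics(["c3", "c1", "c9", "c2", "c7", "c5"], {"c1", "c2"}, k=1))
```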
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, which is the same as in our model, to encode the full-length articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoder with limited-length articles. The results show that the seq2seq model gets low scores on all relevance metrics, mainly because of the mode collapse problem described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). The figure shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. We conclude that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model can achieve better performance than previous models, two questions remain: why our model can outperform them, and how to further improve the performance. To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still suffers from the discrimination between the plausible comments and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments which are more relevant in topic with the articles get lower scores than the plausible comments which are more literally relevant with the articles. With the help of our proposed model, proposed+IR achieves the best performance, and achieves a better accuracy to discriminate the plausible comments and the correct comments. Our proposed model incorporates the topic information, so the correct comments which are more similar to the articles in topic obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding predicting the plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 |
100cf8b72d46da39fedfe77ec939fb44f25de77f | 100cf8b72d46da39fedfe77ec939fb44f25de77f_1 | Q: Which paired corpora did they use in the other experiment?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe with the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex patterns of comment generation and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is contained in the input text. However, the comments are often only weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
The unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves its generalization. Many comments provide unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but matching these comments with the corresponding news articles requires redundant work. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples at the training stage, so it can alleviate the negative sampling bias.
Although there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans generate comments: humans do not need to go through the whole article, but are capable of making a comment after capturing its general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are varied, covering topics ranging from pets to sports.
The retrieval-based model should score the match between the incoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions that improve the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still find comments efficiently. However, the study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
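For reference, the Laplace approximation mentioned in this section admits a closed form for the logistic-normal parameters that approximate a Dirichlet prior in the softmax basis. The sketch below uses the diagonal-covariance form common in neural topic models; it is illustrative rather than the authors' exact derivation, and the symmetric concentration value is an assumption, not a value reported in the paper.

```python
import numpy as np

def laplace_approx(alpha):
    """Logistic-normal (mu, diagonal sigma^2) approximating Dirichlet(alpha)
    in the softmax basis; K is the number of topics."""
    alpha = np.asarray(alpha, dtype=float)
    K = alpha.size
    mu = np.log(alpha) - np.log(alpha).mean()
    var = (1.0 / alpha) * (1.0 - 2.0 / K) + (1.0 / K**2) * (1.0 / alpha).sum()
    return mu, var

# symmetric Dirichlet prior over 100 topics (concentration 0.02 is illustrative)
mu0, var0 = laplace_approx(np.full(100, 0.02))
```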
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, keeping only the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be varied, so it is intractable to find all the possible references to compare the model outputs against. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts (a brief construction sketch is given after the list):
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
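As noted before the list, a rough construction of such a 200-comment candidate set could look as follows; the tf-idf retrieval and frequency counting mirror the description above, but the function signature and data handling are assumptions.

```python
import random
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_candidate_set(news_text, correct, train_comments, size=200):
    # Plausible: 50 most tf-idf-similar training comments that are not correct
    vec = TfidfVectorizer()
    mat = vec.fit_transform(train_comments + [news_text])
    sims = cosine_similarity(mat[-1], mat[:-1]).ravel()
    ranked = [train_comments[i] for i in sims.argsort()[::-1]]
    plausible = [c for c in ranked if c not in correct][:50]

    # Popular: 50 most frequent comments in the training set
    popular = [c for c, _ in Counter(train_comments).most_common(50)]

    # Random: fill up to `size` unique comments
    candidates = list(dict.fromkeys(list(correct) + plausible + popular))
    while len(candidates) < size:
        c = random.choice(train_comments)
        if c not in candidates:
            candidates.append(c)
    return candidates
```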
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, which is the same as in our model, to encode the full-length articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoder with limited-length articles. The results show that the seq2seq model gets low scores on all relevance metrics, mainly because of the mode collapse problem described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
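As a usage illustration only (not the paper's evaluation script), sentence-level BLEU against multiple reference comments can be computed with an off-the-shelf toolkit such as NLTK; the example sentences below are toy data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [r.split() for r in [
    "lebron james is unstoppable on christmas day",
    "great nba christmas games this year",
]]
hypothesis = "lebron james dominated the christmas day game".split()

# smoothing avoids zero scores when higher-order n-grams do not match
smooth = SmoothingFunction().method1
print(sentence_bleu(references, hypothesis, smoothing_function=smooth))
```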
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). The figure shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. We conclude that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model can achieve better performance than previous models, two questions remain: why our model can outperform them, and how to further improve the performance. To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still suffers from the discrimination between the plausible comments and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments which are more relevant in topic with the articles get lower scores than the plausible comments which are more literally relevant with the articles. With the help of our proposed model, proposed+IR achieves the best performance, and achieves a better accuracy to discriminate the plausible comments and the correct comments. Our proposed model incorporates the topic information, so the correct comments which are more similar to the articles in topic obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding predicting the plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | Chinese dataset BIBREF0 |
8cc56fc44136498471754186cfa04056017b4e54 | 8cc56fc44136498471754186cfa04056017b4e54_0 | Q: By how much does their system outperform the lexicon-based models?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need for parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topics are represented with a variational topic model, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe with the sequence-to-sequence model. Despite the input articles being varied, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex pattern of generating comments and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is contained in the input text. However, the comments are often only weakly related to the input articles, and part of the information in the comments is external. Therefore, much more paired data is required for the supervised model to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
The unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but it requires redundant work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so it can alleviate the negative sampling bias.
Although there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans generate comments: humans do not need to go through the whole article, but are capable of making a comment after capturing its general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are diverse, covering topics ranging from pets to sports.
The retrieval-based model should score the matching between the upcoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many existing solutions for improving the efficiency of this search. Therefore, even when the number of candidate comments is very large, the model can still find the highest-scoring comments efficiently. However, the study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
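A minimal sketch of the dot-product retrieval step described above, assuming the article and comment representations have already been computed as dense vectors; the vector sizes and the small stand-in pool are illustrative, not the paper's setup.

```python
import numpy as np

def retrieve_top_k(article_vec: np.ndarray, comment_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Score every candidate comment against one article with a dot product
    and return the indices of the k highest-scoring comments."""
    scores = comment_vecs @ article_vec          # (num_comments,)
    top_k = np.argpartition(-scores, k)[:k]      # unordered top-k
    return top_k[np.argsort(-scores[top_k])]     # sort the k winners by score

# Illustrative usage with random vectors standing in for topic representations.
rng = np.random.default_rng(0)
article = rng.normal(size=100)                   # e.g. a 100-dimensional topic vector
comments = rng.normal(size=(4500, 100))          # small stand-in for the 4.5M-comment pool
print(retrieve_top_k(article, comments, k=5))
```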
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representation INLINEFORM5 into a topic representation INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
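Because the equations above survive only as placeholders in this extraction, the following PyTorch-style sketch shows one plausible reading of the encoder/decoder: an MLP compresses the bag-of-words, two linear heads produce the parameters of the topic posterior, and a single linear layer maps a topic vector back to a word distribution. The exact layer layout is an assumption; the sizes mirror those reported in the implementation details (512 hidden units, 100 topics, 30k vocabulary). The prior and the sampling step are sketched after the next paragraphs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowTopicVAE(nn.Module):
    """Sketch of the bag-of-words encoder/decoder described above (layout assumed)."""
    def __init__(self, vocab_size=30000, hidden=512, num_topics=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, num_topics)       # posterior mean head
        self.to_logvar = nn.Linear(hidden, num_topics)   # posterior log-variance head
        self.decoder = nn.Linear(num_topics, vocab_size, bias=False)  # topic -> word logits

    def encode(self, bow):
        h = self.encoder(bow)                            # compress the bag-of-words
        return self.to_mu(h), self.to_logvar(h)

    def reconstruction_loss(self, bow, topic):
        # Each word in the bag is generated independently from the topic mixture.
        log_probs = F.log_softmax(self.decoder(topic), dim=-1)
        return -(bow * log_probs).sum(dim=-1).mean()
```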
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
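Building on the sketch above, the Laplace-approximated prior, the reparameterized sampling step, and the Gaussian KL term can be written as follows. The closed-form prior parameters are the standard ones for approximating a symmetric Dirichlet with a logistic normal (as in BIBREF6 -style models); whether the paper uses exactly this form, and the Dirichlet hyper-parameter value in the commented usage, are assumptions.

```python
import torch

def dirichlet_prior_params(alpha, num_topics):
    """Laplace approximation of a symmetric Dirichlet(alpha) in the softmax basis."""
    a = torch.full((num_topics,), float(alpha))
    mu = a.log() - a.log().mean()
    var = (1.0 / a) * (1.0 - 2.0 / num_topics) + (1.0 / num_topics ** 2) * (1.0 / a).sum()
    return mu, var

def sample_topic(mu_post, logvar_post):
    """Reparameterized sample z = mu + sigma * eps, mapped onto the simplex."""
    eps = torch.randn_like(mu_post)
    return torch.softmax(mu_post + torch.exp(0.5 * logvar_post) * eps, dim=-1)

def kl_to_prior(mu_post, logvar_post, mu_prior, var_prior):
    """KL(q || p) between diagonal Gaussians: the KL term of the lower bound."""
    var_post = logvar_post.exp()
    kl = 0.5 * (var_post / var_prior
                + (mu_prior - mu_post) ** 2 / var_prior
                - 1.0 + var_prior.log() - logvar_post)
    return kl.sum(dim=-1).mean()

# Assumed usage with the encoder/decoder sketched earlier:
# mu, logvar = model.encode(bow)
# topic = sample_topic(mu, logvar)
# loss = kl_to_prior(mu, logvar, *dirichlet_prior_params(0.02, 100)) \
#        + model.reconstruction_loss(bow, topic)
```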
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (the variational lower bound above), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of the Seq2Seq model), and INLINEFORM2 is a hyper-parameter that balances the two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 and paired data INLINEFORM4 .
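A schematic of the joint objective: the unsupervised loss on unpaired documents plus a weighted supervised loss on article-comment pairs, with both losses computed through a shared encoder. The helper names and the commented-out training step are assumptions, not the paper's code.

```python
def joint_loss(unsup_loss, sup_loss, lam=1.0):
    """Joint objective: unsupervised topic-model loss plus lambda times the
    supervised matching loss; both terms are computed with the shared encoder."""
    return unsup_loss + lam * sup_loss

# Schematic semi-supervised step (model, optimizer and the two batches assumed to exist):
# unsup = elbo_loss(model, unpaired_bow)                          # unpaired articles/comments
# sup   = matching_loss(model, paired_articles, paired_comments)  # e.g. cross-entropy
# loss  = joint_loss(unsup, sup, lam=1.0)                         # lambda = 1.0 as reported
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```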
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
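A small preprocessing sketch matching the filtering rules above (Jieba tokenization, dropping articles with fewer than 30 content words or fewer than 20 comments); the field names of the raw records are assumptions.

```python
import jieba

def preprocess(raw_articles):
    """Tokenize with Jieba and keep only articles with >= 30 content words
    and >= 20 comments, as described for the Tencent News dataset."""
    kept = []
    for art in raw_articles:  # each record is assumed to have title/content/comments fields
        content_tokens = jieba.lcut(art["content"])
        if len(content_tokens) < 30 or len(art["comments"]) < 20:
            continue
        kept.append({
            "title": jieba.lcut(art["title"]),
            "content": content_tokens,
            "comments": [jieba.lcut(c) for c in art["comments"]],
        })
    return kept
```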
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only keep the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimizer BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
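The statement "gradually increase the KL term from 0 to 1 after each epoch" admits more than one reading; the sketch below assumes a simple linear ramp over an assumed number of warm-up epochs.

```python
def kl_weight(epoch: int, warmup_epochs: int = 10) -> float:
    """Linearly anneal the KL coefficient from 0 to 1 over the first epochs
    to mitigate KL vanishing (an assumed reading of the schedule above)."""
    return min(1.0, epoch / float(warmup_epochs))

# Inside the training loop the lower bound would then be combined as:
# loss = recon_loss + kl_weight(epoch) * kl_loss
```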
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query, and rank the candidate comments by the similarity of their tf-idf vectors (a minimal sketch of this baseline follows the baseline list below). The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
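As referenced in the TF-IDF baseline above, a minimal retrieval sketch with scikit-learn: it ranks candidate comments by the cosine similarity of their tf-idf vectors, assuming the documents have already been segmented and whitespace-joined (the custom tokenizer keeps single-character Chinese tokens that the default pattern would drop).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_retrieve(articles, candidate_comments, top_k=5):
    """Rank candidate comments for each article by tf-idf cosine similarity.
    Inputs are lists of pre-segmented, whitespace-joined strings."""
    vectorizer = TfidfVectorizer(tokenizer=str.split, token_pattern=None)
    comment_vecs = vectorizer.fit_transform(candidate_comments)
    article_vecs = vectorizer.transform(articles)            # title + body joined upstream
    sims = cosine_similarity(article_vecs, comment_vecs)     # (num_articles, num_comments)
    return sims.argsort(axis=1)[:, ::-1][:, :top_k]          # indices of the best comments
```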
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be diverse, so it is intractable to find all the possible references to be compared with the model outputs. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return the rank of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
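A sketch of assembling the 200-comment candidate set from the four sources described above; the plausible and popular subsets are assumed to be precomputed as described.

```python
import random

def build_candidate_set(correct, plausible, popular, training_comments, size=200, seed=0):
    """Combine correct, plausible (top-50 tf-idf), and popular (top-50 frequent)
    comments, then pad with random training comments up to `size` unique items."""
    rng = random.Random(seed)
    candidates = list(dict.fromkeys(correct + plausible + popular))  # dedupe, keep order
    seen = set(candidates)
    pool = [c for c in training_comments if c not in seen]
    rng.shuffle(pool)
    candidates.extend(pool[: size - len(candidates)])
    return candidates
```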
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
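All three ranking metrics can be computed directly from the 1-based rank positions of the human comments; a minimal sketch follows, with illustrative example ranks.

```python
def recall_at_k(ranks, k):
    """Fraction of human comments ranked within the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mean_rank(ranks):
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 12, 2, 57]   # illustrative 1-based ranks of the correct comments
print(recall_at_k(ranks, 10), mean_rank(ranks), mean_reciprocal_rank(ranks))
```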
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, which is the same as our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoder with limited-length articles. The results show that the seq2seq model gets low scores on all relevant metrics, mainly because of the mode collapse problem described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
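Of the four metrics, BLEU is the simplest to reproduce; below is a sketch with NLTK's sentence-level BLEU against multiple reference comments (METEOR, ROUGE, and CIDEr need their own packages and are omitted here). Tokenization is assumed to be the same Jieba segmentation used elsewhere.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_against_references(hypothesis_tokens, reference_token_lists):
    """Sentence BLEU of one generated/retrieved comment against several references."""
    smooth = SmoothingFunction().method1   # avoids zero scores on short comments
    return sentence_bleu(reference_token_lists, hypothesis_tokens,
                         smoothing_function=smooth)

# Illustrative call with already-tokenized comments (tokens are made up):
# score = bleu_against_references(["湖人", "加油"], [["湖人", "总冠军"], ["加油", "詹姆斯"]])
```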
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired examples. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). The figure shows that IR+Proposed outperforms the supervised IR model given the same paired dataset. This indicates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model achieves better performance than previous models, two questions remain: why does our model outperform them, and how can the performance be further improved? To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between the plausible comments and the correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more relevant to the articles in topic, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance, and discriminates between the plausible comments and the correct comments more accurately. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to this analysis of the error types, future work still needs to focus on avoiding the prediction of plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using the topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20.
Under the generative evaluation setting, the proposed model + IR2 had better BLEU by 0.044, better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029.
8cc56fc44136498471754186cfa04056017b4e54 | 8cc56fc44136498471754186cfa04056017b4e54_1 | Q: By how much does their system outperform the lexicon-based models?
The proposed model is better than both lexicon-based models by a significant margin in all metrics: BLEU 0.261 vs 0.250, ROUGE 0.162 vs 0.155, etc.
5fa431b14732b3c47ab6eec373f51f2bca04f614 | 5fa431b14732b3c47ab6eec373f51f2bca04f614_0 | Q: Which lexicon-based models did they compare with?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe for the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly stems from the mismatch between the complex patterns of commenting and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is involved in the input text. However, the comments are often weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
The unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but it requires extra work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so it can alleviate the negative sampling bias.
Although there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans write comments: humans do not need to go through the whole article, but are capable of making a comment after capturing its general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large-scale set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are diverse, covering topics from pets to sports.
The retrieval-based model should score the matching between the upcoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching between the articles and the comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions for improving the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still retrieve the top comments efficiently. However, a detailed study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
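To make the scoring step concrete, the following sketch ranks a candidate pool with a single matrix-vector product. The function names, the embedding dimension, and the use of exact top-k instead of an approximate MIPS index are illustrative assumptions, not the original implementation.

```python
import numpy as np

def retrieve_comments(article_vec, comment_matrix, k=5):
    """Rank candidate comments for one article by dot-product score.

    article_vec:    (d,) topic representation of the article.
    comment_matrix: (N, d) topic representations of the N candidate comments.
    Returns the indices of the top-k comments, best first, and their scores.
    """
    # A single matrix-vector product scores every candidate at once.
    scores = comment_matrix @ article_vec          # shape (N,)
    # Exact top-k; a MIPS / approximate-nearest-neighbour index could
    # replace this argsort when N is in the millions.
    top_k = np.argsort(-scores)[:k]
    return top_k, scores[top_k]

# Toy usage with random vectors standing in for learned topic representations.
rng = np.random.default_rng(0)
article = rng.normal(size=128)
pool = rng.normal(size=(10_000, 128))
indices, scores = retrieve_comments(article, pool, k=3)
print(indices, scores)
```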
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as a “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train the VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 of the Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
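The following PyTorch sketch summarizes the topic model described above: a bag-of-words encoder, a logistic-normal reparameterization with a Laplace-approximated symmetric Dirichlet prior, and a softmax decoder over the vocabulary. It is a minimal illustration under assumed details (the ReLU encoder, the symmetric Dirichlet parameter, and the absence of batch normalization are assumptions), not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralVariationalTopicModel(nn.Module):
    """Minimal VAE-style topic model over bag-of-words input.

    hidden=512 and n_topics=100 mirror the settings reported below;
    everything else is a generic, illustrative choice.
    """

    def __init__(self, vocab_size, n_topics=100, hidden=512, alpha=0.02):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, n_topics)
        self.to_logvar = nn.Linear(hidden, n_topics)
        # Topic-word matrix used by the decoder.
        self.topic_word = nn.Linear(n_topics, vocab_size, bias=False)

        # Laplace approximation of a symmetric Dirichlet(alpha) prior
        # as a logistic normal (assumed symmetric alpha).
        a = torch.full((n_topics,), alpha)
        prior_mu = a.log() - a.log().mean()
        prior_var = (1.0 / a) * (1 - 2.0 / n_topics) + (1.0 / a).sum() / n_topics ** 2
        self.register_buffer("prior_mu", prior_mu)
        self.register_buffer("prior_var", prior_var)

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        theta = F.softmax(z, dim=-1)                            # topic representation
        log_probs = F.log_softmax(self.topic_word(theta), dim=-1)
        recon = -(bow * log_probs).sum(-1)                      # reconstruction loss
        var = logvar.exp()
        # KL between the diagonal-Gaussian posterior and the approximated prior.
        kl = 0.5 * ((var + (mu - self.prior_mu) ** 2) / self.prior_var
                    - 1.0 + self.prior_var.log() - logvar).sum(-1)
        return (recon + kl).mean(), theta
```

At test time, the returned `theta` plays the role of the topic representation used to score article-comment pairs.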
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
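A hypothetical training step for this joint objective might look as follows. The method names (e.g. `sup_model.loss`) are placeholders for whatever supervised loss is used; only the shared-encoder idea and the weighted sum are taken from the description above.

```python
def semi_supervised_step(topic_model, sup_model, unpaired_bow, paired_batch,
                         optimizer, lam=1.0):
    """One hypothetical step of the joint semi-supervised objective.

    topic_model: the unsupervised neural variational topic model (L_unsup).
    sup_model:   the supervised matching/generation model (L_sup); it is
                 assumed to share its encoder with topic_model and to expose
                 a .loss(batch) method -- both are illustrative assumptions.
    """
    optimizer.zero_grad()
    unsup_loss, _ = topic_model(unpaired_bow)       # loss on unpaired documents
    sup_loss = sup_model.loss(paired_batch)         # loss on article-comment pairs
    loss = unsup_loss + lam * sup_loss              # lam balances the two terms
    loss.backward()
    optimizer.step()
    return float(loss)
```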
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
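A minimal preprocessing sketch along these lines is shown below. The dictionary schema for an article record is an assumption, while the Jieba tokenization and the 30-word/20-comment filtering thresholds follow the description above.

```python
import jieba

def keep_article(content_tokens, comments):
    """Filtering rule described above: drop articles with fewer than 30
    content words or fewer than 20 comments."""
    return len(content_tokens) >= 30 and len(comments) >= 20

def preprocess(article):
    # `article` is assumed to be a dict with 'title', 'content', 'comments';
    # the real dataset schema may differ.
    title = jieba.lcut(article["title"])
    content = jieba.lcut(article["content"])
    comments = [jieba.lcut(c) for c in article["comments"]]
    if keep_article(content, comments):
        return {"title": title, "content": content, "comments": comments}
    return None
```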
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only keep the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
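The KL-annealing schedule can be sketched as below; the linear per-epoch increase is one plausible reading of "gradually increase the KL term from 0 to 1 after each epoch", not a confirmed detail.

```python
def kl_weight(epoch, n_warmup_epochs=20):
    """Annealed weight for the KL term: grows from 0 toward 1 as training
    proceeds (linear per-epoch schedule assumed)."""
    return min(1.0, epoch / float(n_warmup_epochs))

# Inside the training loop the annealed weight scales the KL part of the loss:
# loss = recon_loss + kl_weight(epoch) * kl_loss
```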
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be diverse, so it is intractable to find all the possible references to compare with the model outputs. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
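For reference, the ranking metrics above can be computed per article as in the following sketch, where the scores of the 200 candidates come from either a retrieval score or a generative log-likelihood; variable names are illustrative.

```python
import numpy as np

def retrieval_metrics(scores, correct_ids, k=1):
    """Recall@k, Mean Rank (MR) and Mean Reciprocal Rank (MRR) for one article.

    scores:      array of model scores for the candidate comments (higher = better).
    correct_ids: indices of the ground-truth human comments in the candidate set.
    """
    order = np.argsort(-np.asarray(scores))                    # best candidate first
    rank_of = {cand: r + 1 for r, cand in enumerate(order)}    # 1-based ranks
    ranks = np.array([rank_of[c] for c in correct_ids])
    recall_at_k = float(np.mean(ranks <= k))   # fraction of human comments in top-k
    mean_rank = float(ranks.mean())
    mrr = float((1.0 / ranks).mean())
    return recall_at_k, mean_rank, mrr

# Toy usage: 5 candidates, candidates 0 and 3 are the human comments.
print(retrieval_metrics([0.9, 0.1, 0.4, 0.8, 0.2], correct_ids=[0, 3], k=1))
```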
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, which is the same as our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoder with limited-length articles. The table shows that the seq2seq model gets low scores on all relevance metrics, mainly because of the mode collapse problem as described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
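As an illustration, multi-reference BLEU for one generated or retrieved comment can be computed with NLTK as below; METEOR, ROUGE, and CIDEr need their own tooling, and the smoothing choice here is an assumption made because comments are short.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_against_references(generated, references):
    """BLEU of one generated (or retrieved) comment against the reference
    comments, all given as token lists."""
    smooth = SmoothingFunction().method1   # smooth zero n-gram counts on short texts
    return sentence_bleu(references, generated, smoothing_function=smooth)

# Example with toy token lists standing in for tokenized comments.
refs = [["great", "game"], ["lebron", "was", "amazing"]]
print(bleu_against_references(["what", "a", "great", "game"], refs))
```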
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired examples. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. This demonstrates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model achieves better performance than previous models, two questions remain: why our model outperforms them, and how to further improve the performance. To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with comments based on lexical similarity. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is that the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and use language that differs from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between the plausible comments and the correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance, and discriminates the plausible comments from the correct comments more accurately. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to this analysis of the error types of our model, future work still needs to focus on avoiding the prediction of plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by the recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | TF-IDF, NVDM
33ccbc401b224a48fba4b167e86019ffad1787fb | 33ccbc401b224a48fba4b167e86019ffad1787fb_0 | Q: How many comments were used?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need for parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topics are represented with a neural variational topic model, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe for the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly stems from the mismatch between the complex patterns of commenting and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is involved in the input text. However, the comments are often weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
The unsupervised learning method is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but it requires extra work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so it can alleviate the negative sampling bias.
Although there is a semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans write comments: humans do not need to go through the whole article, but are capable of making a comment after capturing its general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large-scale set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are diverse, covering topics from pets to sports.
The retrieval-based model should score the matching between the upcoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching between the articles and the comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions for improving the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still retrieve the top comments efficiently. However, a detailed study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as a “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train the VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 of the Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only keep the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be diverse, so it is intractable to find all the possible references to compare with the model outputs. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, which is the same as our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoder with limited-length articles. The table shows that the seq2seq model gets low scores on all relevance metrics, mainly because of the mode collapse problem as described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired examples. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. This demonstrates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model achieves better performance than previous models, two questions remain: why our model outperforms them, and how to further improve the performance. To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with comments based on lexical similarity. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is that the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and use language that differs from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between the plausible comments and the correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance, and discriminates the plausible comments from the correct comments more accurately. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to this analysis of the error types of our model, future work still needs to focus on avoiding the prediction of plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by the recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | from 50K to 4.8M
cca74448ab0c518edd5fc53454affd67ac1a201c | cca74448ab0c518edd5fc53454affd67ac1a201c_0 | Q: How many articles did they have?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need for parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topics are represented with a neural variational topic model, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe for the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly stems from the mismatch between the complex patterns of commenting and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is involved in the input text. However, the comments are often weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
Unsupervised learning is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model learn more complex patterns of commenting and improves its generalization. Many comments provide unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but matching these comments with the corresponding news articles requires considerable extra work. With the help of unsupervised learning, the model can also learn to generate these interesting comments. Additionally, unsupervised learning does not require negative samples in the training stage, so it can alleviate the negative sampling bias.
Although there is semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. It is also similar to how humans generate comments. Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are diverse, covering topics ranging from pets to sports.
The retrieval-based model should score the match between an incoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the match between articles and comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions for improving the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still retrieve comments efficiently. However, a detailed study of MIPS is beyond the scope of this work; we refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
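To make the retrieval step concrete, the following minimal sketch (NumPy, not from the paper) scores every candidate comment against one article by a dot product of their topic representations and returns the top-k candidates by brute force; a real system with millions of candidates would replace the brute-force step with an approximate MIPS index. The array shapes and the helper name are assumptions for illustration only.

```python
import numpy as np

def top_k_comments(article_vec, comment_matrix, k=10):
    """Score every candidate comment against one article with a dot product
    and return the indices of the k highest-scoring comments."""
    # comment_matrix: (num_comments, dim); article_vec: (dim,)
    scores = comment_matrix @ article_vec          # one score per candidate
    top_k = np.argpartition(-scores, k)[:k]        # unordered top-k
    return top_k[np.argsort(-scores[top_k])]       # sort the k winners

# toy usage with random "topic" representations
rng = np.random.default_rng(0)
article = rng.normal(size=128)
comments = rng.normal(size=(4500, 128))            # scaled-down candidate pool
print(top_k_comments(article, comments, k=5))
```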
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
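The encoder/decoder described above can be sketched in PyTorch as follows. This is a minimal illustration, assuming a single fully-connected encoder layer and the hyper-parameters reported later (a 30,000-word vocabulary, hidden size 512, 100 topics); the authors' exact layer structure may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NVTMEncoderDecoder(nn.Module):
    """Minimal sketch of the bag-of-words encoder/decoder described above."""
    def __init__(self, vocab_size=30000, hidden=512, num_topics=100):
        super().__init__()
        self.enc = nn.Linear(vocab_size, hidden)
        self.mu = nn.Linear(hidden, num_topics)       # posterior mean
        self.logvar = nn.Linear(hidden, num_topics)   # posterior log-variance
        self.dec = nn.Linear(num_topics, vocab_size)  # topic -> word distribution

    def encode(self, bow):                 # bow: (batch, vocab_size) word counts
        h = F.relu(self.enc(bow))
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        theta = F.softmax(z, dim=-1)       # topic proportions
        return F.log_softmax(self.dec(theta), dim=-1)  # per-word log-probabilities
```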
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
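A hedged sketch of the sampling and loss computation is given below. The reparameterization trick and the Gaussian KL term are standard; the prior mean and variance follow the Laplace approximation of a symmetric Dirichlet prior in the form popularised by Srivastava and Sutton (BIBREF6), and the concentration value used here is an arbitrary placeholder rather than the authors' setting.

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, so gradients flow through the sampling step."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def dirichlet_kl(mu, logvar, alpha=0.02):
    """KL between the logistic-normal posterior and the Laplace approximation
    of a symmetric Dirichlet(alpha) prior; alpha is a placeholder value."""
    k = mu.size(-1)
    a = torch.full_like(mu, alpha)
    prior_mu = a.log() - a.log().mean(-1, keepdim=True)
    prior_var = (1.0 / a) * (1 - 2.0 / k) + (1.0 / (k * k)) * (1.0 / a).sum(-1, keepdim=True)
    var = logvar.exp()
    kl = 0.5 * (var / prior_var + (prior_mu - mu).pow(2) / prior_var
                - 1 + prior_var.log() - logvar).sum(-1)
    return kl.mean()

def elbo_loss(bow, log_word_probs, mu, logvar):
    """Reconstruction term plus KL term, matching the lower bound above."""
    recon = -(bow * log_word_probs).sum(-1).mean()
    return recon + dirichlet_kl(mu, logvar)
```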
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
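A minimal sketch of the joint objective is shown below; `topic_loss` and `matching_loss` are hypothetical method names standing in for the unsupervised ELBO and the supervised loss of the respective models, which share the same encoder.

```python
def semi_supervised_step(model, paired_batch, unpaired_batch, lam=1.0):
    """One hedged training step: the topic model runs on unpaired documents,
    while the shared encoder also feeds a supervised matching loss on pairs.
    `lam` corresponds to the balancing hyper-parameter in the joint objective."""
    l_unsup = model.topic_loss(unpaired_batch)      # unsupervised ELBO (hypothetical method)
    l_sup = model.matching_loss(*paired_batch)      # e.g. cross-entropy (hypothetical method)
    return l_unsup + lam * l_sup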
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary and keep only the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
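The KL annealing can be sketched as a simple warm-up schedule; the paper only states that the weight is increased after each epoch, so the linear form below is an assumption.

```python
def kl_weight(epoch, total_epochs=20):
    """Linear KL annealing: the weight on the KL term grows from 0 toward 1
    as training proceeds, to counter KL vanishing."""
    return min(1.0, epoch / total_epochs)
```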
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
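As a concrete reference for the lexical baseline above, the TF-IDF retrieval can be implemented in a few lines with scikit-learn; this sketch ranks candidate comments by cosine similarity in tf-idf space and is an illustration rather than the exact implementation used in the experiments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_retrieve(articles, candidate_comments, top_k=1):
    """TF-IDF baseline: embed each article (title + body) and every candidate
    comment into tf-idf space, then rank comments by cosine similarity."""
    vec = TfidfVectorizer()
    comment_mat = vec.fit_transform(candidate_comments)
    article_mat = vec.transform(articles)
    sims = cosine_similarity(article_mat, comment_mat)   # (n_articles, n_comments)
    return sims.argsort(axis=1)[:, ::-1][:, :top_k]      # indices of best comments
```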
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can vary widely, so it is intractable to find all the possible references to compare the model outputs against. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
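The three ranking metrics above can be computed per article as sketched below (scores are then averaged over the test set); the function signature is hypothetical.

```python
def ranking_metrics(ranked_ids, correct_ids, k=1):
    """Recall@k, mean rank and MRR for one article, given the model's ranked
    candidate list and the set of ground-truth (correct) comment ids."""
    ranks = [ranked_ids.index(c) + 1 for c in correct_ids if c in ranked_ids]
    recall_at_k = sum(r <= k for r in ranks) / max(len(correct_ids), 1)
    mean_rank = sum(ranks) / max(len(ranks), 1)
    mrr = sum(1.0 / r for r in ranks) / max(len(ranks), 1)
    return recall_at_k, mean_rank, mrr
```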
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles has to be limited. Therefore, we use an MLP-based encoder, the same as in our model, to encode the full-length articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoders with length-limited articles. The table shows that the seq2seq model gets low scores on all relevant metrics, mainly because of the mode collapse problem described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
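As an illustration of the generative evaluation, sentence-level BLEU can be computed with NLTK as sketched below; the evaluation scripts actually used for the paper may differ in tokenisation and smoothing choices.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_for_comment(generated, references):
    """Sentence-level BLEU of one generated comment against the reference
    comments (all tokenised into word lists); smoothing avoids zero scores
    for short comments."""
    smooth = SmoothingFunction().method1
    return sentence_bleu(references, generated, smoothing_function=smooth)
```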
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). The figure shows that IR+Proposed outperforms the supervised IR model given the same paired dataset. This indicates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model achieves better performance than previous models, two questions remain: why does our model outperform them, and how can the performance be improved further? To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between plausible and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance and discriminates more accurately between plausible and correct comments. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding the prediction of plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. Other studies on comments mostly aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using the topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | 198,112
b69ffec1c607bfe5aa4d39254e0770a3433a191b | b69ffec1c607bfe5aa4d39254e0770a3433a191b_0 | Q: What news comment dataset was used?
Text: Introduction
Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.
Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.
Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.
To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.
The contributions of this work are as follows:
Machine Commenting
In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges.
Challenges
Here, we first introduce the challenges of building a well-performed machine commenting system.
The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe with the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex patterns of comment generation and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is contained in the input text. However, comments are often only weakly related to the input articles, and part of the information in the comments is external. Therefore, the supervised model requires much more paired data to alleviate the mode collapse problem.
One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.
There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments.
Solutions
Facing the above challenges, we provide three solutions to the problems.
Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).
Unsupervised learning is also important for machine commenting to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps the model learn more complex patterns of commenting and improves its generalization. Many comments provide unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but matching these comments with the corresponding news articles requires considerable extra work. With the help of unsupervised learning, the model can also learn to generate these interesting comments. Additionally, unsupervised learning does not require negative samples in the training stage, so it can alleviate the negative sampling bias.
Although there is semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. It is also similar to how humans generate comments. Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics.
Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning.
Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are diverse, covering topics ranging from pets to sports.
The retrieval-based model should score the match between an incoming article and each comment, and return the comments that best match the article. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the match between articles and comments. The other is how to efficiently compute the matching scores, because the number of comments in the pool is large.
To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0
The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are many solutions for improving the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still retrieve comments efficiently. However, a detailed study of MIPS is beyond the scope of this work; we refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model.
Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.
We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.
In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0
where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1
We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .
At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 .
Training
In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0
where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 .
Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary and keep only the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimization algorithm BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning rate to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
Baselines
We compare our model with several unsupervised models and supervised models.
Unsupervised baseline models are as follows:
TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.
LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.
NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.
The supervised baseline models are:
S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.
IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
Retrieval Evaluation
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can vary widely, so it is intractable to find all the possible references to compare the model outputs against. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return a ranking of the candidate comments. The candidate comment set consists of the following parts:
Correct: The ground-truth comments of the corresponding news provided by the human.
Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.
Popular: The 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.
Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.
Following previous work, we measure the rank in terms of the following metrics:
Recall@k: The proportion of human comments found in the top-k recommendations.
Mean Rank (MR): The mean rank of the human comments.
Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.
The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.
Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.
We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles has to be limited. Therefore, we use an MLP-based encoder, the same as in our model, to encode the full-length articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoders with length-limited articles. The table shows that the seq2seq model gets low scores on all relevant metrics, mainly because of the mode collapse problem described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
Generative Evaluation
Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .
Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.
Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure FIGREF39 shows the curve (blue) of the Recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). The figure shows that IR+Proposed outperforms the supervised IR model given the same paired dataset. This indicates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
Although our proposed model achieves better performance than previous models, two questions remain: why does our model outperform them, and how can the performance be improved further? To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments.
Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.
IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between plausible and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance and discriminates more accurately between plausible and correct comments. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding the prediction of plausible comments.
Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. Other studies on comments mostly aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.
Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN.
Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using the topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | Chinese dataset BIBREF0
f5cf8738e8d211095bb89350ed05ee7f9997eb19 | f5cf8738e8d211095bb89350ed05ee7f9997eb19_0 | Q: By how much do they outperform standard BERT?
Text: Introduction
With ever-increasing amounts of data available, there is an increase in the need to offer tooling to speed up processing, and eventually making sense of this data. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals that have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and depending on type or category, the data needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task.
In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task.
Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT; BIBREF1) outperformed previous state-of-the-art methods by a large margin on various NLP tasks. We adopt BERT for text-based classification and extend the model with additional metadata provided in the context of the shared task, such as author, publisher, publishing date, etc.
A key contribution of this paper is the inclusion of additional (meta) data using a state-of-the-art approach for text processing. Being a transfer learning approach, it facilitates the task solution with external knowledge for a setup in which relatively little training data is available. More precisely, we enrich BERT, as our pre-trained text representation model, with knowledge graph embeddings that are based on Wikidata BIBREF2, add metadata provided by the shared task organisers (title, author(s), publishing date, etc.) and collect additional information on authors for this particular document classification task. As we do not rely on text-based features alone but also utilize document metadata, we consider this a document classification problem. The proposed approach is an attempt to solve this problem in an exemplary way for the single dataset provided by the organisers of the shared task.
Related Work
A central challenge in work on genre classification is the definition of a both rigid (for theoretical purposes) and flexible (for practical purposes) mode of representation that is able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.
With regard to text and document classification, BERT (Bidirectional Encoder Representations from Transformers) BIBREF1 is a pre-trained embedding model that yields state of the art results in a wide span of NLP tasks, such as question answering, textual entailment and natural language inference learning BIBREF8. BIBREF9 are among the first to apply BERT to document classification. Acknowledging challenges like incorporating syntactic information, or predicting multiple labels, they describe how they adapt BERT for the document classification task. In general, they introduce a fully-connected layer over the final hidden state that contains one neuron each representing an input token, and further optimize the model choosing soft-max classifier parameters to weight the hidden state layer. They report state of the art results in experiments based on four popular datasets. An approach exploiting Hierarchical Attention Networks is presented by BIBREF10. Their model introduces a hierarchical structure to represent the hierarchical nature of a document. BIBREF10 derive attention on the word and sentence level, which makes the attention mechanisms react flexibly to long and short distant context information during the building of the document representations. They test their approach on six large scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points. BIBREF11 (the organisers of the GermEval 2019 shared task on hierarchical text classification) use shallow capsule networks, reporting that these work well on structured data for example in the field of visual inference, and outperform CNNs, LSTMs and SVMs in this area. They use the Web of Science (WOS) dataset and introduce a new real-world scenario dataset called Blurb Genre Collection (BGC).
With regard to external resources to enrich the classification task, BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding. They use structural knowledge represented by Wikidata entities and their relation to each other. A mix of large-scale textual corpora and knowledge graphs is used to further train language representation exploiting ERNIE BIBREF13, considering lexical, syntactic, and structural information. BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia. Based on a bag of words approach, they derive a thesaurus of concepts from Wikipedia and use it for document expansion. The resulting document representation improves the performance of an SVM classifier for predicting text categories.
Dataset and Task
Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:
A title.
A list of authors. The average number of authors per book is 1.13, with most books (14,970) having a single author and one outlier with 28 authors.
A short descriptive text (blurb) with an average length of 95 words.
A URL pointing to a page on the publisher's website.
An ISBN number.
The date of publication.
The books are labeled according to the hierarchy used by the German publisher Random House. This taxonomy includes a mix of genre and topical categories. It has eight top-level genre categories, 93 on the second level and 242 on the most detailed third level. The eight top-level labels are `Ganzheitliches Bewusstsein' (holistic awareness/consciousness), `Künste' (arts), `Sachbuch' (non-fiction), `Kinderbuch & Jugendbuch' (children and young adults), `Ratgeber' (counselor/advisor), `Literatur & Unterhaltung' (literature and entertainment), `Glaube & Ethik' (faith and ethics), `Architektur & Garten' (architecture and garden). We refer to the shared task description for details on the lower levels of the ontology.
Note that we do not have access to any of the full texts. Hence, we use the blurbs as input for BERT. Given the relatively short average length of the blurbs, this considerably decreases the amount of textual data available for a single book.
The shared task is divided into two sub-tasks. Sub-task A is to classify a book, using the information provided as explained above, according to the top-level of the taxonomy, selecting one or more of the eight labels. Sub-task B is to classify a book according to the detailed taxonomy, specifying labels on the second and third level of the taxonomy as well (in total 343 labels). This renders both sub-tasks multi-label classification tasks.
Experiments
As indicated in Section SECREF1, we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification. We use the pre-trained models and enrich them with additional metadata and tune the models for both classification sub-tasks.
Experiments ::: Metadata Features
In addition to the metadata provided by the organisers of the shared task (see Section SECREF3), we add the following features.
Number of authors.
Academic title (Dr. or Prof.), if found in author names (0 or 1).
Number of words in title.
Number of words in blurb.
Length of longest word in blurb.
Mean word length in blurb.
Median word length in blurb.
Age in years after publication date.
Probability of first author being male or female based on the Gender-by-Name dataset. Available for 87% of books in training set (see Table TABREF21).
The statistics (length, average, etc.) regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier. For example, books labeled `Kinderbuch & Jugendbuch' (children and young adults) have a title that is on average 5.47 words long, whereas books labeled `Künste' (arts) on average have shorter titles of 3.46 words. The binary feature for academic title is based on the assumption that academics are more likely to write non-fiction. The gender feature is included to explore (and potentially exploit) whether or not there is a gender-bias for particular genres.
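To make the feature set concrete, the following is a minimal sketch of how these metadata features could be computed for a single record. The record fields mirror the dataset description above, but the `gender_lookup` table standing in for the Gender-by-Name dataset and all helper names are our own assumptions for illustration, not code released with the paper.

```python
import re
import statistics
from datetime import date

def metadata_features(record, gender_lookup, today=date(2019, 1, 1)):
    """Compute the hand-crafted metadata features for one book record.

    `record` is assumed to be a dict with keys 'title', 'blurb',
    'authors' (list of names) and 'publication_date' (datetime.date).
    `gender_lookup` maps a first name to P(male); unknown names get 0.5.
    """
    title_words = record["title"].split()
    blurb_words = record["blurb"].split()
    word_lengths = [len(w) for w in blurb_words] or [0]

    # Binary flag for an academic title anywhere in the author names.
    has_academic_title = int(any(
        re.search(r"\b(Dr\.|Prof\.)", name) for name in record["authors"]
    ))

    # Gender probability of the first author, based on the first name.
    first_name = record["authors"][0].split()[0] if record["authors"] else ""
    p_male = gender_lookup.get(first_name, 0.5)

    return {
        "n_authors": len(record["authors"]),
        "academic_title": has_academic_title,
        "n_title_words": len(title_words),
        "n_blurb_words": len(blurb_words),
        "longest_blurb_word": max(word_lengths),
        "mean_word_length": statistics.mean(word_lengths),
        "median_word_length": statistics.median(word_lengths),
        "age_years": (today - record["publication_date"]).days / 365.25,
        "p_male": p_male,
        "p_female": 1.0 - p_male,
    }
```

The ten returned values correspond to the ten-dimensional metadata vector used later in the model (two dimensions for gender).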
Experiments ::: Author Embeddings
Whereas one should not judge a book by its cover, we argue that additional information on the author can support the classification task. Authors often adhere to their specific style of writing and are likely to specialize in a specific genre.
To be precise, we want to include author identity information, which can be retrieved by selecting particular properties from, for example, the Wikidata knowledge graph (such as date of birth, nationality, or other biographical features). A drawback of this approach, however, is that one has to manually select and filter those properties that improve classification performance. This is why, instead, we follow a more generic approach and utilize automatically generated graph embeddings as author representations.
Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph. The node distance can be interpreted as topical similarity between the corresponding authors.
We rely on pre-trained embeddings based on PyTorch BigGraph BIBREF15. The graph model is trained on the full Wikidata graph, using a translation operator to represent relations. Figure FIGREF23 visualizes the locality of the author embeddings.
To derive the author embeddings, we look up Wikipedia articles that match with the author names and map the articles to the corresponding Wikidata items. If a book has multiple authors, the embedding of the first author for which an embedding is available is used. Following this method, we are able to retrieve embeddings for 72% of the books in the training and test set (see Table TABREF21).
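A sketch of this lookup step, assuming the PyTorch BigGraph Wikidata embeddings have already been loaded into a plain dictionary from Wikidata QIDs to 200-dimensional vectors, and that a mapping from author names to QIDs has been built via the Wikipedia article matching described above; both data structures and the function name are assumptions for illustration.

```python
import numpy as np

EMB_DIM = 200  # length of the pre-trained Wikidata author embeddings

def author_embedding(authors, name_to_qid, qid_to_vector):
    """Return the embedding of the first author that can be resolved.

    `name_to_qid` maps an author name to a Wikidata QID, `qid_to_vector`
    maps a QID to its pre-trained PyTorch BigGraph vector. Books whose
    authors cannot be resolved get a zero vector, so the downstream MLP
    can still be applied uniformly.
    """
    for name in authors:
        qid = name_to_qid.get(name)
        if qid is not None and qid in qid_to_vector:
            return np.asarray(qid_to_vector[qid], dtype=np.float32)
    return np.zeros(EMB_DIM, dtype=np.float32)
```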
Experiments ::: Pre-trained German Language Model
Although the pre-trained BERT language models are multilingual and, therefore, support German, we rely on a BERT model that was exclusively pre-trained on German text, as published by the German company Deepset AI. This model was trained from scratch on the German Wikipedia, news articles and court decisions. Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks (GermEval2018-Fine and GermEval 2014).
Experiments ::: Model Architecture
Our neural network architecture, shown in Figure FIGREF31, resembles the original BERT model BIBREF1 and combines text- and non-text features with a multilayer perceptron (MLP).
The BERT architecture uses 12 hidden layers, each consisting of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have a minor impact.
The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into an MLP with two layers, 1024 units each and ReLU activation functions. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold, the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1.
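The architecture just described can be sketched as follows in PyTorch. We assume the Hugging Face `transformers` port of the German BERT model; the pooled [CLS] representation and the dimensionalities of the extra inputs follow the description above, but the code is an illustrative reconstruction, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BookClassifier(nn.Module):
    """BERT text encoder + metadata + author embedding, fused by an MLP."""

    def __init__(self, num_labels, meta_dim=10, author_dim=200,
                 bert_name="bert-base-german-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)  # 12 layers, 768 units
        fused_dim = self.bert.config.hidden_size + meta_dim + author_dim
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.out = nn.Linear(1024, num_labels)  # 8 (task A) or 343 (task B)

    def forward(self, input_ids, attention_mask, meta, author_emb):
        # Title and blurb are concatenated and truncated to 300 tokens
        # before being turned into `input_ids` by the tokenizer.
        text_repr = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        fused = torch.cat([text_repr, meta, author_emb], dim=-1)
        logits = self.out(self.mlp(fused))
        # A softmax over the logits is applied afterwards; every label whose
        # probability exceeds its class-specific threshold is predicted.
        return logits
```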
Experiments ::: Implementation
Training is performed with batch size $b=16$, dropout probability $d=0.1$, learning rate $\eta = 2 \cdot 10^{-5}$ (Adam optimizer) and 5 training epochs. These hyperparameters are the ones proposed by BIBREF1 for BERT fine-tuning. We did not experiment with hyperparameter tuning ourselves except for optimizing the classification threshold for each class separately. All experiments are run on a GeForce GTX 1080 Ti (11 GB), whereby a single training epoch takes up to 10 minutes. If there is no single label for which the prediction probability is above the classification threshold, the most popular label (Literatur & Unterhaltung) is used as the prediction.
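The per-class threshold search and the fallback to the most frequent label can be implemented as in the following sketch. The paper only states that thresholds are varied in steps of 0.1 between 0 and 1 and optimised per class; using the per-class F1 on the validation set as the selection criterion is our assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(probs, y_true, grid=np.arange(0.0, 1.01, 0.1)):
    """Pick, for every class, the threshold that maximises its F1 score.

    `probs` and `y_true` are (n_samples, n_classes) arrays of predicted
    probabilities and binary gold labels on the validation set.
    """
    thresholds = np.zeros(probs.shape[1])
    for c in range(probs.shape[1]):
        scores = [f1_score(y_true[:, c], probs[:, c] >= t, zero_division=0)
                  for t in grid]
        thresholds[c] = grid[int(np.argmax(scores))]
    return thresholds

def predict(probs, thresholds, fallback_label):
    """Apply per-class thresholds; back off to the most frequent label
    (Literatur & Unterhaltung) when no class clears its threshold."""
    preds = (probs >= thresholds).astype(int)
    empty = preds.sum(axis=1) == 0
    preds[empty, fallback_label] = 1
    return preds
```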
Experiments ::: Baseline
To compare against a relatively simple baseline, we implemented a Logistic Regression classifier chain from scikit-learn BIBREF16. This baseline uses the text only and converts it to TF-IDF vectors. As with the BERT model, it performs 8-class multi-label classification for sub-task A and 343-class multi-label classification for sub-task B, ignoring the hierarchical aspect in the labels.
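A baseline along these lines can be reproduced with a few lines of scikit-learn; concrete hyperparameters (TF-IDF settings, maximum iterations) are not given in the paper and are chosen here only to make the sketch runnable.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain
from sklearn.preprocessing import MultiLabelBinarizer

def train_baseline(texts, label_sets):
    """TF-IDF + Logistic Regression classifier chain (multi-label).

    `texts` are title+blurb strings, `label_sets` are lists of gold labels
    per book (8 possible labels for sub-task A, 343 for sub-task B).
    """
    binarizer = MultiLabelBinarizer()
    y = binarizer.fit_transform(label_sets)

    vectorizer = TfidfVectorizer(max_features=50000)
    x = vectorizer.fit_transform(texts)

    chain = ClassifierChain(LogisticRegression(max_iter=1000))
    chain.fit(x, y)
    return vectorizer, binarizer, chain
```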
Results
Table TABREF34 shows the results of our experiments. As prescribed by the shared task, the essential evaluation metric is the micro-averaged F1-score. All scores reported in this paper are obtained using models that are trained on the training set and evaluated on the validation set. For the final submission to the shared task competition, the best-scoring setup is used and trained on the training and validation sets combined.
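For reference, the evaluation metric can be computed directly with scikit-learn; `y_true` and `y_pred` are the binary label matrices produced by the prediction step sketched above.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

def evaluate(y_true, y_pred):
    """Micro-averaged precision, recall and F1, as used in the shared task."""
    return {
        "precision": precision_score(y_true, y_pred, average="micro", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="micro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="micro", zero_division=0),
    }
```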
We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both sub-tasks. With an F1-score of 87.20 for task A and 64.70 for task B, the setup using BERT-German with metadata features and author embeddings (1) outperforms all other setups. Looking at the precision score only, BERT-German with metadata features (2) but without author embeddings performs best.
In comparison to the baseline (7), our evaluation shows that deep transformer models like BERT considerably outperform the classical TF-IDF approach, also when the input is the same (using the title and blurb only). BERT-German (4) and BERT-Multilingual (5) are only using text-based features (title and blurb), whereby the text representations of the BERT-layers are directly fed into the classification layer.
To establish the information gain of author embeddings, we train a linear classifier on author embeddings, using this as the only feature. The author-only model (6) is exclusively evaluated on books for which author embeddings are available, so the numbers are based on a slightly smaller validation set. With an F1-score of 61.99 and 32.13 for sub-tasks A and B, respectively, the author model yields the worst result. However, the information contained in the author embeddings helps improve performance, as the results of the best-performing setup show. When evaluating the best model (1) only on books for which author embeddings are available, we find a further improvement with respect to the F1 score (task A: from 87.20 to 87.81; task B: from 64.70 to 65.74).
Discussion
The best performing setup uses BERT-German with metadata features and author embeddings. In this setup the most data is made available to the model, indicating that, perhaps not surprisingly, more data leads to better classification performance. We expect that having access to the actual text of the book will further increase performance. The average number of words per blurb is 95 and only 0.25% of books exceed our cut-off point of 300 words per blurb. In addition, the distribution of labeled books is imbalanced, i.e. for many classes only a single-digit number of training instances exists (Fig. FIGREF38). Thus, this task can be considered a low-resource scenario, where including related data (such as author embeddings and author identity features such as gender and academic title) or making certain characteristics more explicit (title and blurb length statistics) helps. Furthermore, it should be noted that the blurbs do not provide summary-like abstracts of the book, but instead act as teasers, intended to persuade the reader to buy the book.
As reflected by the recent popularity of deep transformer models, they considerably outperform the Logistic Regression baseline using the TF-IDF representation of the blurbs. However, for the simpler sub-task A, the performance difference between the baseline model and the multilingual BERT model is only six points, while the baseline consumes only a fraction of BERT's computing resources. The BERT model trained for German (from scratch) outperforms the multilingual BERT model by under three points for sub-task A and over six points for sub-task B, confirming the findings reported by the creators of the BERT-German models for earlier GermEval shared tasks.
While generally on par for sub-task A, for sub-task B there is a relatively large discrepancy between precision and recall scores. In all setups, precision is considerably higher than recall. We expect this to be down to the fact that for some of the 343 labels in sub-task B, there are very few instances. This means that if the classifier predicts a certain label, it is likely to be correct (i. e., high precision), but for many instances having low-frequency labels, this low-frequency label is never predicted (i. e., low recall).
As mentioned in Section SECREF30, we neglect the hierarchical nature of the labels and flatten the hierarchy (with a depth of three levels) to a single set of 343 labels for sub-task B. We expect this to have negative impact on performance, because it allows a scenario in which, for a particular book, we predict a label from the first level and also a non-matching label from the second level of the hierarchy. The example Coenzym Q10 (Table TABREF36) demonstrates this issue. While the model correctly predicts the second level label Gesundheit & Ernährung (health & diet), it misses the corresponding first level label Ratgeber (advisor). Given the model's tendency to higher precision rather than recall in sub-task B, as a post-processing step we may want to take the most detailed label (on the third level of the hierarchy) to be correct and manually fix the higher level labels accordingly. We leave this for future work and note that we expect this to improve performance, but it is hard to say by how much. We hypothesize that an MLP with more and bigger layers could improve the classification performance. However, this would increase the number of parameters to be trained, and thus requires more training data (such as the book's text itself, or a summary of it).
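The post-processing step discussed here (accepting a predicted low-level label and repairing its ancestors) could look as follows; the `parent` mapping from each label to its parent in the Random House hierarchy would be derived from the hierarchy file distributed with the shared task and is assumed to be given.

```python
def repair_hierarchy(predicted_labels, parent):
    """Add all missing ancestors of every predicted label.

    `predicted_labels` is a set of label names for one book, `parent`
    maps a label to its parent label (top-level labels map to None).
    """
    repaired = set(predicted_labels)
    for label in predicted_labels:
        node = parent.get(label)
        while node is not None:
            repaired.add(node)
            node = parent.get(node)
    return repaired

# For the Coenzym Q10 example discussed above,
# repair_hierarchy({"Gesundheit & Ernährung"}, parent)
# would add the missing first-level label "Ratgeber".
```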
Conclusions and Future Work
In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that task-specific information such as author names and publication metadata improves the classification task substantially compared to a text-only approach. Especially when metadata feature engineering is less trivial, adding task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available.
Future work comprises the use of hierarchical information in a post-processing step to refine the classification. Another promising approach to tackle the low resource problem for task B would be to use label embeddings. Many labels are similar and semantically related. The relationships between labels can be utilized to model them in a joint embedding space BIBREF17. However, a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online. The Random House taxonomy (see above) includes category names, i. e., labels, that relate to several different dimensions including, among others, genre, topic and function.
This work is done in the context of a larger project that develops a platform for curation technologies. Under the umbrella of this project, the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular, specialized processing workflows, including parameterising the included pipelines. Depending on content type and genre, it may make sense to apply OCR post-processing (for digitized books from centuries ago), machine translation (for content in languages unknown to the user), information extraction, or other particular and specialized procedures. Constructing such a generic ontology for digital content is a challenging task, and classification performance is heavily dependent on input data (both in shape and amount) and on the nature of the ontology to be used (in the case of this paper, the one predefined by the shared task organisers). In the context of our project, we continue to work towards a maximally generic content ontology, and at the same time towards applied classification architectures such as the one presented in this paper.
Acknowledgments
This research is funded by the German Federal Ministry of Education and Research (BMBF) through the “Unternehmen Region”, instrument “Wachstumskern” QURATOR (grant no. 03WKDA1A). We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript. | up to four percentage points in accuracy |
bed527bcb0dd5424e69563fba4ae7e6ea1fca26a | bed527bcb0dd5424e69563fba4ae7e6ea1fca26a_0 | Q: What dataset do they use? | 2019 GermEval shared task on hierarchical text classification |
bed527bcb0dd5424e69563fba4ae7e6ea1fca26a | bed527bcb0dd5424e69563fba4ae7e6ea1fca26a_1 | Q: What dataset do they use?
Text: Introduction
With ever-increasing amounts of data available, there is an increase in the need to offer tooling to speed up processing, and eventually making sense of this data. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals that have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and depending on type or category, the data needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task.
In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task.
Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT; BIBREF1) outperformed previous state-of-the-art methods by a large margin on various NLP tasks. We adopt BERT for text-based classification and extend the model with additional metadata provided in the context of the shared task, such as author, publisher, publishing date, etc.
A key contribution of this paper is the inclusion of additional (meta) data using a state-of-the-art approach for text processing. Being a transfer learning approach, it facilitates the task solution with external knowledge for a setup in which relatively little training data is available. More precisely, we enrich BERT, as our pre-trained text representation model, with knowledge graph embeddings that are based on Wikidata BIBREF2, add metadata provided by the shared task organisers (title, author(s), publishing date, etc.) and collect additional information on authors for this particular document classification task. As we do not rely on text-based features alone but also utilize document metadata, we consider this as a document classification problem. The proposed approach is an attempt to solve this problem exemplary for single dataset provided by the organisers of the shared task.
Related Work
A central challenge in work on genre classification is the definition of a both rigid (for theoretical purposes) and flexible (for practical purposes) mode of representation that is able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.
With regard to text and document classification, BERT (Bidirectional Encoder Representations from Transformers) BIBREF1 is a pre-trained embedding model that yields state of the art results in a wide span of NLP tasks, such as question answering, textual entailment and natural language inference learning BIBREF8. BIBREF9 are among the first to apply BERT to document classification. Acknowledging challenges like incorporating syntactic information, or predicting multiple labels, they describe how they adapt BERT for the document classification task. In general, they introduce a fully-connected layer over the final hidden state that contains one neuron each representing an input token, and further optimize the model choosing soft-max classifier parameters to weight the hidden state layer. They report state of the art results in experiments based on four popular datasets. An approach exploiting Hierarchical Attention Networks is presented by BIBREF10. Their model introduces a hierarchical structure to represent the hierarchical nature of a document. BIBREF10 derive attention on the word and sentence level, which makes the attention mechanisms react flexibly to long and short distant context information during the building of the document representations. They test their approach on six large scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points. BIBREF11 (the organisers of the GermEval 2019 shared task on hierarchical text classification) use shallow capsule networks, reporting that these work well on structured data for example in the field of visual inference, and outperform CNNs, LSTMs and SVMs in this area. They use the Web of Science (WOS) dataset and introduce a new real-world scenario dataset called Blurb Genre Collection (BGC).
With regard to external resources to enrich the classification task, BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding. They use structural knowledge represented by Wikidata entities and their relation to each other. A mix of large-scale textual corpora and knowledge graphs is used to further train language representation exploiting ERNIE BIBREF13, considering lexical, syntactic, and structural information. BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia. Based on a bag of words approach, they derive a thesaurus of concepts from Wikipedia and use it for document expansion. The resulting document representation improves the performance of an SVM classifier for predicting text categories.
Dataset and Task
Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:
A title.
A list of authors. The average number of authors per book is 1.13, with most books (14,970) having a single author and one outlier with 28 authors.
A short descriptive text (blurb) with an average length of 95 words.
A URL pointing to a page on the publisher's website.
An ISBN number.
The date of publication.
The books are labeled according to the hierarchy used by the German publisher Random House. This taxonomy includes a mix of genre and topical categories. It has eight top-level genre categories, 93 on the second level and 242 on the most detailed third level. The eight top-level labels are `Ganzheitliches Bewusstsein' (holistic awareness/consciousness), `Künste' (arts), `Sachbuch' (non-fiction), `Kinderbuch & Jugendbuch' (children and young adults), `Ratgeber' (counselor/advisor), `Literatur & Unterhaltung' (literature and entertainment), `Glaube & Ethik' (faith and ethics), `Architektur & Garten' (architecture and garden). We refer to the shared task description for details on the lower levels of the ontology.
Note that we do not have access to any of the full texts. Hence, we use the blurbs as input for BERT. Given the relatively short average length of the blurbs, this considerably decreases the amount of data points available for a single book.
The shared task is divided into two sub-task. Sub-task A is to classify a book, using the information provided as explained above, according to the top-level of the taxonomy, selecting one or more of the eight labels. Sub-task B is to classify a book according to the detailed taxonomy, specifying labels on the second and third level of the taxonomy as well (in total 343 labels). This renders both sub-tasks a multi-label classification task.
Experiments
As indicated in Section SECREF1, we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification. We use the pre-trained models and enrich them with additional metadata and tune the models for both classification sub-tasks.
Experiments ::: Metadata Features
In addition to the metadata provided by the organisers of the shared task (see Section SECREF3), we add the following features.
Number of authors.
Academic title (Dr. or Prof.), if found in author names (0 or 1).
Number of words in title.
Number of words in blurb.
Length of longest word in blurb.
Mean word length in blurb.
Median word length in blurb.
Age in years after publication date.
Probability of first author being male or female based on the Gender-by-Name dataset. Available for 87% of books in training set (see Table TABREF21).
The statistics (length, average, etc.) regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier. For example, books labeled `Kinderbuch & Jugendbuch' (children and young adults) have a title that is on average 5.47 words long, whereas books labeled `Künste' (arts) on average have shorter titles of 3.46 words. The binary feature for academic title is based on the assumption that academics are more likely to write non-fiction. The gender feature is included to explore (and potentially exploit) whether or not there is a gender-bias for particular genres.
Experiments ::: Author Embeddings
Whereas one should not judge a book by its cover, we argue that additional information on the author can support the classification task. Authors often adhere to their specific style of writing and are likely to specialize in a specific genre.
To be precise, we want to include author identity information, which can be retrieved by selecting particular properties from, for example, the Wikidata knowledge graph (such as date of birth, nationality, or other biographical features). A drawback of this approach, however, is that one has to manually select and filter those properties that improve classification performance. This is why, instead, we follow a more generic approach and utilize automatically generated graph embeddings as author representations.
Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph. The node distance can be interpreted as topical similarity between the corresponding authors.
We rely on pre-trained embeddings based on PyTorch BigGraph BIBREF15. The graph model is trained on the full Wikidata graph, using a translation operator to represent relations. Figure FIGREF23 visualizes the locality of the author embeddings.
To derive the author embeddings, we look up Wikipedia articles that match with the author names and map the articles to the corresponding Wikidata items. If a book has multiple authors, the embedding of the first author for which an embedding is available is used. Following this method, we are able to retrieve embeddings for 72% of the books in the training and test set (see Table TABREF21).
Experiments ::: Pre-trained German Language Model
Although the pre-trained BERT language models are multilingual and, therefore, support German, we rely on a BERT model that was exclusively pre-trained on German text, as published by the German company Deepset AI. This model was trained from scratch on the German Wikipedia, news articles and court decisions. Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks (GermEval2018-Fine and GermEval 2014).
Experiments ::: Model Architecture
Our neural network architecture, shown in Figure FIGREF31, resembles the original BERT model BIBREF1 and combines text- and non-text features with a multilayer perceptron (MLP).
The BERT architecture uses 12 hidden layers, each layer consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have minor impact.
The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into a MLP with two layers, 1024 units each and ReLu activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1.
Experiments ::: Implementation
Training is performed with batch size $b=16$, dropout probability $d=0.1$, learning rate $\eta =2^{-5}$ (Adam optimizer) and 5 training epochs. These hyperparameters are the ones proposed by BIBREF1 for BERT fine-tuning. We did not experiment with hyperparameter tuning ourselves except for optimizing the classification threshold for each class separately. All experiments are run on a GeForce GTX 1080 Ti (11 GB), whereby a single training epoch takes up to 10min. If there is no single label for which prediction probability is above the classification threshold, the most popular label (Literatur & Unterhaltung) is used as prediction.
Experiments ::: Baseline
To compare against a relatively simple baseline, we implemented a Logistic Regression classifier chain from scikit-learn BIBREF16. This baseline uses the text only and converts it to TF-IDF vectors. As with the BERT model, it performs 8-class multi-label classification for sub-task A and 343-class multi-label classification for sub-task B, ignoring the hierarchical aspect in the labels.
Results
Table TABREF34 shows the results of our experiments. As prescribed by the shared task, the essential evaluation metric is the micro-averaged F1-score. All scores reported in this paper are obtained using models that are trained on the training set and evaluated on the validation set. For the final submission to the shared task competition, the best-scoring setup is used and trained on the training and validation sets combined.
We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both sub-tasks. With an F1-score of 87.20 for task A and 64.70 for task B, the setup using BERT-German with metadata features and author embeddings (1) outperforms all other setups. Looking at the precision score only, BERT-German with metadata features (2) but without author embeddings performs best.
In comparison to the baseline (7), our evaluation shows that deep transformer models like BERT considerably outperform the classical TF-IDF approach, also when the input is the same (using the title and blurb only). BERT-German (4) and BERT-Multilingual (5) are only using text-based features (title and blurb), whereby the text representations of the BERT-layers are directly fed into the classification layer.
To establish the information gain of author embeddings, we train a linear classifier on author embeddings, using this as the only feature. The author-only model (6) is exclusively evaluated on books for which author embeddings are available, so the numbers are based on a slightly smaller validation set. With an F1-score of 61.99 and 32.13 for sub-tasks A and B, respectively, the author model yields the worst result. However, the information contained in the author embeddings help improve performance, as the results of the best-performing setup show. When evaluating the best model (1) only on books for that author embeddings are available, we find a further improvement with respect to F1 score (task A: from 87.20 to 87.81; task B: 64.70 to 65.74).
Discussion
The best performing setup uses BERT-German with metadata features and author embeddings. In this setup the most data is made available to the model, indicating that, perhaps not surprisingly, more data leads to better classification performance. We expect that having access to the actual text of the book will further increase performance. The average number of words per blurb is 95 and only 0.25% of books exceed our cut-off point of 300 words per blurb. In addition, the distribution of labeled books is imbalanced, i.e. for many classes only a single digit number of training instances exist (Fig. FIGREF38). Thus, this task can be considered a low resource scenario, where including related data (such as author embeddings and author identity features such as gender and academic title) or making certain characteristics more explicit (title and blurb length statistics) helps. Furthermore, it should be noted that the blurbs do not provide summary-like abstracts of the book, but instead act as teasers, intended to persuade the reader to buy the book.
As reflected by the recent popularity of deep transformer models, they considerably outperform the Logistic Regression baseline using TF-IDF representation of the blurbs. However, for the simpler sub-task A, the performance difference between the baseline model and the multilingual BERT model is only six points, while the baseline consumes only a fraction of BERT's computing resources. The BERT model trained for German (from scratch) outperforms the multilingual BERT model by under three points for sub-task A and over six points for sub-task B, confirming the findings reported by the creators of the BERT-German models for earlier GermEval shared tasks.
While generally on par for sub-task A, for sub-task B there is a relatively large discrepancy between precision and recall scores. In all setups, precision is considerably higher than recall. We expect this to be down to the fact that for some of the 343 labels in sub-task B, there are very few instances. This means that if the classifier predicts a certain label, it is likely to be correct (i. e., high precision), but for many instances having low-frequency labels, this low-frequency label is never predicted (i. e., low recall).
As mentioned in Section SECREF30, we neglect the hierarchical nature of the labels and flatten the hierarchy (with a depth of three levels) to a single set of 343 labels for sub-task B. We expect this to have negative impact on performance, because it allows a scenario in which, for a particular book, we predict a label from the first level and also a non-matching label from the second level of the hierarchy. The example Coenzym Q10 (Table TABREF36) demonstrates this issue. While the model correctly predicts the second level label Gesundheit & Ernährung (health & diet), it misses the corresponding first level label Ratgeber (advisor). Given the model's tendency to higher precision rather than recall in sub-task B, as a post-processing step we may want to take the most detailed label (on the third level of the hierarchy) to be correct and manually fix the higher level labels accordingly. We leave this for future work and note that we expect this to improve performance, but it is hard to say by how much. We hypothesize that an MLP with more and bigger layers could improve the classification performance. However, this would increase the number of parameters to be trained, and thus requires more training data (such as the book's text itself, or a summary of it).
Conclusions and Future Work
In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that task-specific information such as author names and publication metadata improves the classification task considerably compared to a text-only approach. Especially when metadata feature engineering is less trivial, adding additional task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available.
Future work comprises the use of hierarchical information in a post-processing step to refine the classification. Another promising approach to tackle the low resource problem for task B would be to use label embeddings. Many labels are similar and semantically related. The relationships between labels can be utilized to model them in a joint embedding space BIBREF17. However, a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online. The Random House taxonomy (see above) includes category names, i. e., labels, that relate to several different dimensions including, among others, genre, topic and function.
This work is done in the context of a larger project that develops a platform for curation technologies. Under the umbrella of this project, the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular, specialized processing workflows, including parameterising the included pipelines. Depending on content type and genre, it may make sense to apply OCR post-processing (for digitized books from centuries ago), machine translation (for content in languages unknown to the user), information extraction, or other particular and specialized procedures. Constructing such a generic ontology for digital content is a challenging task, and classification performance is heavily dependent on input data (both in shape and amount) and on the nature of the ontology to be used (in the case of this paper, the one predefined by the shared task organisers). In the context of our project, we continue to work towards a maximally generic content ontology, and at the same time towards applied classification architectures such as the one presented in this paper.
Acknowledgments
This research is funded by the German Federal Ministry of Education and Research (BMBF) through the “Unternehmen Region”, instrument “Wachstumskern” QURATOR (grant no. 03WKDA1A). We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript. | GermEval 2019 shared task |
aeab5797b541850e692f11e79167928db80de1ea | aeab5797b541850e692f11e79167928db80de1ea_0 | Q: How do they combine text representations with the knowledge graph embeddings?
Text: Introduction
With ever-increasing amounts of data available, there is an increase in the need to offer tooling to speed up processing, and eventually making sense of this data. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals that have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and depending on type or category, the data needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task.
In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task.
Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT; BIBREF1) outperformed previous state-of-the-art methods by a large margin on various NLP tasks. We adopt BERT for text-based classification and extend the model with additional metadata provided in the context of the shared task, such as author, publisher, publishing date, etc.
A key contribution of this paper is the inclusion of additional (meta) data using a state-of-the-art approach for text processing. Being a transfer learning approach, it facilitates the task solution with external knowledge for a setup in which relatively little training data is available. More precisely, we enrich BERT, as our pre-trained text representation model, with knowledge graph embeddings that are based on Wikidata BIBREF2, add metadata provided by the shared task organisers (title, author(s), publishing date, etc.) and collect additional information on authors for this particular document classification task. As we do not rely on text-based features alone but also utilize document metadata, we consider this as a document classification problem. The proposed approach is an attempt to solve this problem exemplary for single dataset provided by the organisers of the shared task.
Related Work
A central challenge in work on genre classification is the definition of a both rigid (for theoretical purposes) and flexible (for practical purposes) mode of representation that is able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.
With regard to text and document classification, BERT (Bidirectional Encoder Representations from Transformers) BIBREF1 is a pre-trained embedding model that yields state of the art results in a wide span of NLP tasks, such as question answering, textual entailment and natural language inference learning BIBREF8. BIBREF9 are among the first to apply BERT to document classification. Acknowledging challenges like incorporating syntactic information, or predicting multiple labels, they describe how they adapt BERT for the document classification task. In general, they introduce a fully-connected layer over the final hidden state that contains one neuron each representing an input token, and further optimize the model choosing soft-max classifier parameters to weight the hidden state layer. They report state of the art results in experiments based on four popular datasets. An approach exploiting Hierarchical Attention Networks is presented by BIBREF10. Their model introduces a hierarchical structure to represent the hierarchical nature of a document. BIBREF10 derive attention on the word and sentence level, which makes the attention mechanisms react flexibly to long and short distant context information during the building of the document representations. They test their approach on six large scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points. BIBREF11 (the organisers of the GermEval 2019 shared task on hierarchical text classification) use shallow capsule networks, reporting that these work well on structured data for example in the field of visual inference, and outperform CNNs, LSTMs and SVMs in this area. They use the Web of Science (WOS) dataset and introduce a new real-world scenario dataset called Blurb Genre Collection (BGC).
With regard to external resources to enrich the classification task, BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding. They use structural knowledge represented by Wikidata entities and their relation to each other. A mix of large-scale textual corpora and knowledge graphs is used to further train language representation exploiting ERNIE BIBREF13, considering lexical, syntactic, and structural information. BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia. Based on a bag of words approach, they derive a thesaurus of concepts from Wikipedia and use it for document expansion. The resulting document representation improves the performance of an SVM classifier for predicting text categories.
Dataset and Task
Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:
A title.
A list of authors. The average number of authors per book is 1.13, with most books (14,970) having a single author and one outlier with 28 authors.
A short descriptive text (blurb) with an average length of 95 words.
A URL pointing to a page on the publisher's website.
An ISBN number.
The date of publication.
The books are labeled according to the hierarchy used by the German publisher Random House. This taxonomy includes a mix of genre and topical categories. It has eight top-level genre categories, 93 on the second level and 242 on the most detailed third level. The eight top-level labels are `Ganzheitliches Bewusstsein' (holistic awareness/consciousness), `Künste' (arts), `Sachbuch' (non-fiction), `Kinderbuch & Jugendbuch' (children and young adults), `Ratgeber' (counselor/advisor), `Literatur & Unterhaltung' (literature and entertainment), `Glaube & Ethik' (faith and ethics), `Architektur & Garten' (architecture and garden). We refer to the shared task description for details on the lower levels of the ontology.
Note that we do not have access to any of the full texts. Hence, we use the blurbs as input for BERT. Given the relatively short average length of the blurbs, this considerably decreases the amount of data points available for a single book.
The shared task is divided into two sub-tasks. Sub-task A is to classify a book, using the information provided as explained above, according to the top-level of the taxonomy, selecting one or more of the eight labels. Sub-task B is to classify a book according to the detailed taxonomy, specifying labels on the second and third level of the taxonomy as well (in total 343 labels). This renders both sub-tasks a multi-label classification task.
Experiments
As indicated in Section SECREF1, we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification. We use the pre-trained models and enrich them with additional metadata and tune the models for both classification sub-tasks.
Experiments ::: Metadata Features
In addition to the metadata provided by the organisers of the shared task (see Section SECREF3), we add the following features.
Number of authors.
Academic title (Dr. or Prof.), if found in author names (0 or 1).
Number of words in title.
Number of words in blurb.
Length of longest word in blurb.
Mean word length in blurb.
Median word length in blurb.
Age in years after publication date.
Probability of first author being male or female based on the Gender-by-Name dataset. Available for 87% of books in training set (see Table TABREF21).
The statistics (length, average, etc.) regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier. For example, books labeled `Kinderbuch & Jugendbuch' (children and young adults) have a title that is on average 5.47 words long, whereas books labeled `Künste' (arts) on average have shorter titles of 3.46 words. The binary feature for academic title is based on the assumption that academics are more likely to write non-fiction. The gender feature is included to explore (and potentially exploit) whether or not there is a gender-bias for particular genres.
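A sketch of how such a ten-dimensional feature vector could be assembled; the field names of the `book` record, the use of today's date for the age feature, and the gender-probability input are assumptions for illustration rather than the authors' implementation.

```python
import datetime

ACADEMIC_TITLES = ("Dr.", "Prof.")

def metadata_features(book, gender_prob):
    """book: dict with 'authors', 'title', 'blurb', 'date'; gender_prob: (p_male, p_female)."""
    words_title = book["title"].split()
    words_blurb = book["blurb"].split()
    word_lens = sorted(len(w) for w in words_blurb) or [0]
    return [
        len(book["authors"]),                                                # number of authors
        int(any(t in a for a in book["authors"] for t in ACADEMIC_TITLES)),  # academic title flag
        len(words_title),                                                    # words in title
        len(words_blurb),                                                    # words in blurb
        max(word_lens),                                                      # longest word in blurb
        sum(word_lens) / len(word_lens),                                     # mean word length
        word_lens[len(word_lens) // 2],                                      # median word length
        datetime.date.today().year - book["date"].year,                      # age in years
        gender_prob[0],                                                      # P(first author male)
        gender_prob[1],                                                      # P(first author female)
    ]
```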
Experiments ::: Author Embeddings
Whereas one should not judge a book by its cover, we argue that additional information on the author can support the classification task. Authors often adhere to their specific style of writing and are likely to specialize in a specific genre.
To be precise, we want to include author identity information, which can be retrieved by selecting particular properties from, for example, the Wikidata knowledge graph (such as date of birth, nationality, or other biographical features). A drawback of this approach, however, is that one has to manually select and filter those properties that improve classification performance. This is why, instead, we follow a more generic approach and utilize automatically generated graph embeddings as author representations.
Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph. The node distance can be interpreted as topical similarity between the corresponding authors.
We rely on pre-trained embeddings based on PyTorch BigGraph BIBREF15. The graph model is trained on the full Wikidata graph, using a translation operator to represent relations. Figure FIGREF23 visualizes the locality of the author embeddings.
To derive the author embeddings, we look up Wikipedia articles that match with the author names and map the articles to the corresponding Wikidata items. If a book has multiple authors, the embedding of the first author for which an embedding is available is used. Following this method, we are able to retrieve embeddings for 72% of the books in the training and test set (see Table TABREF21).
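The lookup itself reduces to resolving author names to Wikidata items and reading the corresponding PyTorch-BigGraph vector, roughly as sketched below; the two lookup tables are assumed to be precomputed, and the zero-vector fall-back for books without a resolvable author is our assumption (the paper only reports the 72% coverage).

```python
import numpy as np

def author_embedding(authors, wikidata_ids, pbg_vectors, dim=200):
    """Return the 200-dimensional embedding of the first author that can be resolved.

    wikidata_ids: maps author name -> Wikidata QID (resolved via the matching Wikipedia article)
    pbg_vectors:  maps QID -> pre-trained PyTorch-BigGraph embedding
    """
    for name in authors:
        qid = wikidata_ids.get(name)
        if qid is not None and qid in pbg_vectors:
            return pbg_vectors[qid]
    return np.zeros(dim, dtype=np.float32)  # assumed fall-back when no embedding is available
```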
Experiments ::: Pre-trained German Language Model
Although the pre-trained BERT language models are multilingual and, therefore, support German, we rely on a BERT model that was exclusively pre-trained on German text, as published by the German company Deepset AI. This model was trained from scratch on the German Wikipedia, news articles and court decisions. Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks (GermEval2018-Fine and GermEval 2014).
Experiments ::: Model Architecture
Our neural network architecture, shown in Figure FIGREF31, resembles the original BERT model BIBREF1 and combines text- and non-text features with a multilayer perceptron (MLP).
The BERT architecture uses 12 hidden layers, each of which consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have minor impact.
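With the Hugging Face `transformers` library, the text input preparation could look like the sketch below; the model identifier merely stands in for the German BERT model described in the previous section, and the padding strategy is an assumption.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")

def encode(title, blurb, max_length=300):
    """Concatenate title and blurb and truncate to 300 tokens (BERT itself allows up to 512)."""
    return tokenizer(
        title + " " + blurb,
        max_length=max_length,
        truncation=True,
        padding="max_length",
        return_tensors="pt",
    )
```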
The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into an MLP with two layers of 1024 units each and a ReLU activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1.
Experiments ::: Implementation
Training is performed with batch size $b=16$, dropout probability $d=0.1$, learning rate $\eta = 2 \cdot 10^{-5}$ (Adam optimizer) and 5 training epochs. These hyperparameters are the ones proposed by BIBREF1 for BERT fine-tuning. We did not experiment with hyperparameter tuning ourselves except for optimizing the classification threshold for each class separately. All experiments are run on a GeForce GTX 1080 Ti (11 GB), whereby a single training epoch takes up to 10 minutes. If there is no single label for which the prediction probability is above the classification threshold, the most popular label (Literatur & Unterhaltung) is used as prediction.
Experiments ::: Baseline
To compare against a relatively simple baseline, we implemented a Logistic Regression classifier chain from scikit-learn BIBREF16. This baseline uses the text only and converts it to TF-IDF vectors. As with the BERT model, it performs 8-class multi-label classification for sub-task A and 343-class multi-label classification for sub-task B, ignoring the hierarchical aspect in the labels.
Results
Table TABREF34 shows the results of our experiments. As prescribed by the shared task, the essential evaluation metric is the micro-averaged F1-score. All scores reported in this paper are obtained using models that are trained on the training set and evaluated on the validation set. For the final submission to the shared task competition, the best-scoring setup is used and trained on the training and validation sets combined.
We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both sub-tasks. With an F1-score of 87.20 for task A and 64.70 for task B, the setup using BERT-German with metadata features and author embeddings (1) outperforms all other setups. Looking at the precision score only, BERT-German with metadata features (2) but without author embeddings performs best.
In comparison to the baseline (7), our evaluation shows that deep transformer models like BERT considerably outperform the classical TF-IDF approach, also when the input is the same (using the title and blurb only). BERT-German (4) and BERT-Multilingual (5) are only using text-based features (title and blurb), whereby the text representations of the BERT-layers are directly fed into the classification layer.
To establish the information gain of author embeddings, we train a linear classifier on author embeddings, using this as the only feature. The author-only model (6) is exclusively evaluated on books for which author embeddings are available, so the numbers are based on a slightly smaller validation set. With an F1-score of 61.99 and 32.13 for sub-tasks A and B, respectively, the author model yields the worst result. However, the information contained in the author embeddings helps improve performance, as the results of the best-performing setup show. When evaluating the best model (1) only on books for which author embeddings are available, we find a further improvement with respect to F1 score (task A: from 87.20 to 87.81; task B: 64.70 to 65.74).
Discussion
The best performing setup uses BERT-German with metadata features and author embeddings. In this setup the most data is made available to the model, indicating that, perhaps not surprisingly, more data leads to better classification performance. We expect that having access to the actual text of the book will further increase performance. The average number of words per blurb is 95 and only 0.25% of books exceed our cut-off point of 300 words per blurb. In addition, the distribution of labeled books is imbalanced, i.e. for many classes only a single digit number of training instances exist (Fig. FIGREF38). Thus, this task can be considered a low resource scenario, where including related data (such as author embeddings and author identity features such as gender and academic title) or making certain characteristics more explicit (title and blurb length statistics) helps. Furthermore, it should be noted that the blurbs do not provide summary-like abstracts of the book, but instead act as teasers, intended to persuade the reader to buy the book.
As reflected by the recent popularity of deep transformer models, they considerably outperform the Logistic Regression baseline using TF-IDF representation of the blurbs. However, for the simpler sub-task A, the performance difference between the baseline model and the multilingual BERT model is only six points, while the baseline consumes only a fraction of BERT's computing resources. The BERT model trained for German (from scratch) outperforms the multilingual BERT model by under three points for sub-task A and over six points for sub-task B, confirming the findings reported by the creators of the BERT-German models for earlier GermEval shared tasks.
While generally on par for sub-task A, for sub-task B there is a relatively large discrepancy between precision and recall scores. In all setups, precision is considerably higher than recall. We expect this to be down to the fact that for some of the 343 labels in sub-task B, there are very few instances. This means that if the classifier predicts a certain label, it is likely to be correct (i. e., high precision), but for many instances having low-frequency labels, this low-frequency label is never predicted (i. e., low recall).
As mentioned in Section SECREF30, we neglect the hierarchical nature of the labels and flatten the hierarchy (with a depth of three levels) to a single set of 343 labels for sub-task B. We expect this to have negative impact on performance, because it allows a scenario in which, for a particular book, we predict a label from the first level and also a non-matching label from the second level of the hierarchy. The example Coenzym Q10 (Table TABREF36) demonstrates this issue. While the model correctly predicts the second level label Gesundheit & Ernährung (health & diet), it misses the corresponding first level label Ratgeber (advisor). Given the model's tendency to higher precision rather than recall in sub-task B, as a post-processing step we may want to take the most detailed label (on the third level of the hierarchy) to be correct and manually fix the higher level labels accordingly. We leave this for future work and note that we expect this to improve performance, but it is hard to say by how much. We hypothesize that an MLP with more and bigger layers could improve the classification performance. However, this would increase the number of parameters to be trained, and thus requires more training data (such as the book's text itself, or a summary of it).
Conclusions and Future Work
In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that task-specific information such as author names and publication metadata improves the classification task considerably compared to a text-only approach. Especially when metadata feature engineering is less trivial, adding additional task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available.
Future work comprises the use of hierarchical information in a post-processing step to refine the classification. Another promising approach to tackle the low resource problem for task B would be to use label embeddings. Many labels are similar and semantically related. The relationships between labels can be utilized to model them in a joint embedding space BIBREF17. However, a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online. The Random House taxonomy (see above) includes category names, i. e., labels, that relate to several different dimensions including, among others, genre, topic and function.
This work is done in the context of a larger project that develops a platform for curation technologies. Under the umbrella of this project, the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular, specialized processing workflows, including parameterising the included pipelines. Depending on content type and genre, it may make sense to apply OCR post-processing (for digitized books from centuries ago), machine translation (for content in languages unknown to the user), information extraction, or other particular and specialized procedures. Constructing such a generic ontology for digital content is a challenging task, and classification performance is heavily dependent on input data (both in shape and amount) and on the nature of the ontology to be used (in the case of this paper, the one predefined by the shared task organisers). In the context of our project, we continue to work towards a maximally generic content ontology, and at the same time towards applied classification architectures such as the one presented in this paper.
Acknowledgments
This research is funded by the German Federal Ministry of Education and Research (BMBF) through the “Unternehmen Region”, instrument “Wachstumskern” QURATOR (grant no. 03WKDA1A). We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript. | all three representations are concatenated and passed into a MLP |
cda4612b4bda3538d19f4b43dde7bc30c1eda4e5 | cda4612b4bda3538d19f4b43dde7bc30c1eda4e5_0 | Q: What are the traditional methods to identifying important attributes?
Text: The problem we solve in this paper
Knowledge graphs (KG) have been proposed for several years, and their most prominent application is in web search; for example, Google search triggers a certain entity card when a user's query matches or mentions an entity based on some statistical model. The core potential of a knowledge graph lies in its capability of reasoning and inferring, and we have not seen a revolutionary breakthrough in such areas yet. One main obstacle is obviously the lack of sufficient knowledge graph data, including entities, entities' descriptions, entities' attributes, and relationships between entities. A fully functional knowledge graph supporting general-purpose reasoning and inference might still require long years of the community's innovation and hard work. On the other hand, many less demanding applications have great potential to benefit from the availability of information from the knowledge graph, such as query understanding and document understanding in information retrieval/search engines, simple inference in question answering systems, and easy reasoning in domain-limited decision support tools. Not only academia but also industry has been heavily investing in knowledge graphs, such as Google's knowledge graph, Amazon's product graph, Facebook's Graph API, IBM's Watson, and Microsoft's Satori.
In existing knowledge graphs, such as Wikidata and DBpedia, attributes usually do not have an order or priorities, and we do not know which attributes are more important and of more interest to users. Such an importance score of attributes is a vital piece of information in many applications of a knowledge graph. The most important application is the entity card triggered in a search engine when a customer's query hits an entity. An entity usually has a large number of attributes, but an entity card has limited space and can only show the most significant information; knowing attribute importance makes the display of an entity card easy to implement. Attribute importance also has great potential to play a significant role in search engines, namely in deciding the matching score between the query and attribute values. If the query matches a very important attribute, the relevance contribution from such a match should be higher than that of a match on an ignorable attribute. Another application is in e-commerce communications, where a buyer initiates a communication cycle with a seller by sending a product enquiry. Writing the enquiry on a mobile phone is inconvenient, and automatic composing assistance has great potential to improve customer experience by alleviating the writing burden. In the product enquiry, customers need to specify their requirements and ask questions about products, and their requirements and questions are usually about the most important attributes of the products. If we can identify the important attributes of products, we can help customers draft the enquiry automatically to reduce their input time.
Related Research
Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.
As for an already existing knowledge graph, there is plenty of work in literature dealing with ranking entities by relevance without or with a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapted the well-known PageRank/HITS algorithms to semantic web data, which took advantage of property values to rank entities. In BIBREF7 , BIBREF8 , authors also focused on ranking entities, sorting the semantic web resources based on importance, relevance and query length, and aggregating the features together with an overall ranking model.
Only a few works were designed to specifically address the problem of computing attribute rankings in a given knowledge graph. Ibminer BIBREF9 introduced a tool for infobox (an alias of an entity card) template suggestion, which collected attributes from different sources and then sorted them by popularity based on their co-occurrences in the dataset. In BIBREF10, using the structured knowledge base, intermediate features were computed, including the importance or popularity of each entity type, IDF computation for each attribute on a global basis, IDF computation for entity types, etc., and then the features were aggregated to train a classifier. Also, a similar approach in BIBREF11 was designed with more features extracted from GoogleSuggestChars data. In BIBREF12, Ali et al. introduced a new set of features that utilize semantic information about entities as well as information from top-ranked documents from a general search engine. In order to evaluate their approach, they collected a dataset by exploiting Wikipedia infoboxes, whose ordering of attributes reflects the collaborative effort of a large community of users, which might not be accurate.
What we propose and what we have done
There has been broad research on entity detection, relationship extraction, and also missing relationship prediction. For example, BIBREF13, BIBREF14 and BIBREF15 explained how to construct a knowledge graph and how to perform representation learning on knowledge graphs. Some research has been performed on attribute extraction, such as BIBREF16 and BIBREF4; the latter is special in that it also simultaneously computes the attribute importance. As for modeling attribute importance for an existing knowledge graph whose attribute extraction is already completed, we found only a few existing studies, all of which used simple co-occurrences to rank entity attributes. In reality, many knowledge graphs do not contain attribute importance information; for example, in the most famous one, Wikidata, a large number of entities have many attributes, and it is difficult to know which attributes are significant and deserve more attention. In this research we focus on identifying important attributes in existing knowledge graphs. Specifically, we propose a new method of using an extra user-generated data source for evaluating the attribute importance, and we use recently proposed state-of-the-art word/sub-word embedding techniques to match the external data with the attribute definitions and values of entities in knowledge graphs. We then use the statistics obtained from the matching to compare the attribute importance. Our method is generally extensible to any knowledge graph without attribute importance information. As long as an external textual data source can be found, our proposed method will work, even if the external data does not exactly match the attribute text, since the vector embedding performs semantic matching and does not require exact string matching.
The remainder of the paper is organized as follows: Section SECREF2 explains our proposed method in detail, including what kind of external data is required, how to process the external data, how to perform the semantic matching, and how to rank the attributes by statistics. Section SECREF3 introduces our experiments, including the experimental setup, the data, and the experimental results compared with the other candidate methods. Section SECREF3 also briefly introduces our real-world application scenario in e-commerce communication. Section SECREF4 draws conclusions from our experiments and analysis, and also points out promising future research directions.
Our proposed Method
In this section, we will introduce our proposed method in detail. We use our application scenario to explain the logic behind the method, but the scope is not limited to our use case, and it is possible to extend to any existing knowledge graph without attribute importance information.
Application Scenario
Alibaba.com is currently the world's largest cross-border business-to-business (B2B) e-commerce platform, and it supports 17 languages for customers from all over the world. On the website, English is the dominant language and accounts for around 50% of the traffic. The website has already accumulated a very large knowledge graph of products, where an entity is a product or a product category; every entity has a lot of information such as the entity name, images and many attributes without ordering information. The entities are also connected by a taxonomy structure, and similar products usually belong to the same category/sub-category.
Since B2B procurement usually involves a large amount of money, the business is a long process that begins with a product enquiry. Generally speaking, when customers are interested in some product, they start a communication cycle with the seller by sending a product enquiry. In the product enquiry, customers specify their requirements and ask questions about the product. Their requirements and questions usually refer to the most important attributes of the product. Fig. FIGREF5 shows an enquiry example. Alibaba.com has accumulated tens of millions of product enquiries, and we would like to leverage this information, in combination with the product knowledge graph we have, to figure out the most important attributes for each category of products.
In our application scenario, the product knowledge graph is the existing knowledge graph and the enquiry data is the external textual data source. From now on, we will use our application scenario to explain the details of our proposed algorithm.
We propose an unsupervised learning framework for extracting important product attributes from product enquiries. By calculating the semantic similarity between each enquiry sentence and each attribute of the product to which the enquiry corresponds, we identify the product attributes that the customer cares about most.
The attributes described in the enquiry may contain attribute names, attribute values or other expressions; for example, either the word “color” or a color instance word such as “purple” may be mentioned. Therefore, when calculating the semantic similarity between enquiry sentences and product attributes, we need both attribute names and attribute values. As in any other knowledge graph, the product attributes in the knowledge graph we use contain noise and mistakes. We need to clean and normalize the attribute data before consuming it. We will introduce the details of our data cleaning process in Section SECREF14.
FastText Introduction
FastText is a library created by Facebook Research for efficient learning of word representations and sentence classification. Here, we only use its word representation functionality.
FastText models morphology by considering subword units, representing each word by a sum of its character n-grams BIBREF17. In the original model the authors choose the binary logistic loss, and the loss for a single instance (a target word $w_t$ with a context word $w_c$) is written as below: $\log \big (1 + e^{-s(w_t, w_c)}\big ) + \sum _{n \in \mathcal {N}_{t,c}} \log \big (1 + e^{\, s(w_t, n)}\big )$
By denoting the logistic loss function $\ell : x \mapsto \log (1 + e^{-x})$, the loss over a sentence of $T$ words is: $\sum _{t=1}^{T} \Big [ \sum _{c \in \mathcal {C}_t} \ell \big (s(w_t, w_c)\big ) + \sum _{n \in \mathcal {N}_{t,c}} \ell \big (-s(w_t, n)\big ) \Big ]$
The scoring function between a word $w$ and a context word $c$ is: $s(w, c) = \sum _{g \in \mathcal {G}_w} \mathbf {z}_g^{\top } \mathbf {v}_c$
In the above functions, $\mathcal {N}_{t,c}$ is a set of negative examples sampled from the vocabulary, $\mathcal {C}_t$ is the set of indices of words surrounding word $w_t$, $\mathcal {G}_w \subset \lbrace 1, \dots , G\rbrace $ is the set of n-grams appearing in word $w$, $G$ is the size of the dictionary we have for n-grams, and $\mathbf {z}_g$ is the vector representation of each n-gram $g$.
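The scoring function can be illustrated with the small sketch below, which extracts character n-grams and sums their vectors; dictionary hashing and negative sampling are omitted, and the vector lookups are assumed to be plain NumPy arrays.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, padded with boundary symbols as in FastText."""
    padded = "<" + word + ">"
    grams = {padded}  # the whole word is also kept as a special sequence
    for n in range(n_min, n_max + 1):
        grams.update(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

def score(word, context, ngram_vectors, context_vectors):
    """s(w, c): sum of z_g . v_c over the n-grams g of w (unseen n-grams contribute zero)."""
    v_c = context_vectors[context]
    return sum(ngram_vectors[g] @ v_c for g in char_ngrams(word) if g in ngram_vectors)
```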
Compared with word2vec or GloVe, FastText has the following advantages:
It is able to cover rare words and out-of-vocabulary (OOV) words. Since the basic modeling units in FastText are n-grams, both rare words and OOV ones can obtain effective word representations from their composing n-grams. Word2vec and GloVe both fail to provide accurate vector representations for these words. In our application, the training data is written by end customers, and there are many misspellings which easily become OOV words (a brief illustration follows after this list).
Character n-gram embeddings tend to perform better than word2vec and GloVe on smaller datasets.
FastText is more efficient and its training is relatively fast.
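As a brief illustration of the OOV behaviour mentioned above, the gensim-based sketch below trains a toy FastText model and queries a word that never occurs in the training corpus; it is purely illustrative, since the system described here trains its own FastText model on the enquiry corpus.

```python
from gensim.models import FastText

# Toy corpus of tokenized enquiry sentences (illustrative only).
sentences = [["what", "is", "the", "minimum", "order", "quantity"],
             ["which", "colors", "are", "available", "for", "this", "model"]]
model = FastText(sentences=sentences, vector_size=100, window=5, min_count=1, epochs=10)

vec = model.wv["colours"]  # a misspelled/OOV word still gets a vector from its n-grams
```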
Matching
In this section, how to compute the matching between an enquiry sentence and a product attribute is explained in detail. Our explanation here is for a certain product category, and other categories are the same.
As you can see in Fig. FIGREF12, each sentence is compared with each attribute of the product category that the product belongs to. We now get a score between a sentence $s$ and an attribute $a$: $\mathrm {score}(s, a) = \max _{v \in V_a \cup \lbrace a\rbrace } \cos \big (\mathbf {w}_s, \mathbf {w}_v\big )$
where $V_a$ is the set of all possible values for attribute $a$ and $\mathbf {w}_x$ is the word vector for $x$. According to this formula, we can get the top two attributes whose scores are above the threshold $\lambda $ for each sentence. We choose two attributes instead of one because there may be more than one attribute for each sentence. In addition, some sentences are greetings or self-introductions and do not contain attribute information about the product, so we require the score to be higher than a certain threshold.
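A sketch of this matching step is given below; representing a sentence or attribute value by the average of its FastText token vectors and taking the maximum cosine similarity is our reading of the formula above, while the 0.75 threshold and the top-two cut-off follow the experimental setup described later.

```python
import numpy as np

def avg_vector(tokens, wv):
    """Average the FastText vectors of the tokens of a (preprocessed) sentence or value."""
    return np.mean([wv[t] for t in tokens], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def top_attributes(sentence_tokens, attributes, wv, threshold=0.75, k=2):
    """Score every attribute of the category and keep at most the top-k above the threshold.

    attributes: maps an attribute name to the list of its possible values.
    """
    s_vec = avg_vector(sentence_tokens, wv)
    scores = {}
    for name, values in attributes.items():
        candidates = [name] + list(values)
        scores[name] = max(cosine(s_vec, avg_vector(v.split(), wv)) for v in candidates)
    ranked = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    return [(a, s) for a, s in ranked[:k] if s >= threshold]
```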
Data introduction
For our knowledge graph data, entity (product) attributes can be roughly divided into clusters of transaction-order-specific ones and product-specific ones; in this paper, we choose the product-specific ones for further study. We also need to point out that we only focus on the recommended communication language on the Alibaba.com platform, which is English.
To construct the evaluation dataset, the top 14 categories were first chosen based on their business promotion features, and 3 million typical products under each category were then chosen to form the attribute candidates. After preprocessing and basic filtering, the top product-specific attributes from the 14 different categories were chosen to be manually labeled by our annotators.
For each category, each annotator is asked to choose at most 10 important attributes from the buyer's perspective. After all annotators complete their annotations, the attributes are sorted according to the summed votes. In the end, 111 important attributes from the 14 categories are kept for the final evaluation.
Outside of the evaluation explained in this paper, we have actually performed the matching on more than 4,000 categories covering more than 100 million products and more than 20 million enquiries. Due to limited annotation resources, we can only sample a small number of categories (14 here) to evaluate the proposed algorithm.
Data preprocessing
The product enquiry and attribute data preprocessing is shown in Algorithm 1:

Algorithm 1: Data Preprocessing
  Input: raw product enquiries $E$
  Output: valid sentences $S_{valid}$
  1: for every enquiry $e$ in $E$: convert the HTML content of $e$ to plain text
  2: filter out invalid enquiries (non-English enquiries and spam)
  3: split every remaining enquiry into sentences and collect them in $S$
  4: for every sentence $s$ in $S$: apply spelling correction, regularize measures and numbers, drop stop words, and add the result to $S_{valid}$
  5: return $S_{valid}$
Firstly, for every product enquiry, we convert the original HTML data into plain text. Secondly, we filter out useless enquiries, such as non-English enquiries and spam; regular expressions and spam detection are used to detect non-English enquiries and spam, respectively. Thirdly, we get the sentence list $S$ by splitting every enquiry into sentences as described in Section 2.2. Then, for every sentence $s$ in $S$, we perform three extra processing steps: a) Spelling Correction, b) Regular Measures and Numbers, and c) Stop Words Dropping.
Spelling Correction. Since quite a lot of the product enquiries and self-filled attributes are misspelled, we replace the exact words by a fuzzified search using the Levenshtein distance. The method falls back to the fuzzified search only if no exact match is found. Some attributes are actually the same, such as "type" and "product type"; we merge such attributes by checking whether one attribute name is contained in the other.
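A bare-bones version of this correction step is sketched below, assuming a vocabulary built from the attribute names and values; the maximum edit distance of 2 is our choice for illustration.

```python
def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(token, vocabulary, max_dist=2):
    """Keep exact matches; otherwise fall back to the closest in-vocabulary word."""
    if token in vocabulary:
        return token
    best = min(vocabulary, key=lambda w: levenshtein(token, w))
    return best if levenshtein(token, best) <= max_dist else token
```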
Regular Measures and Numbers. Attributes of number type have values composed of numbers and units. We replace all numbers (in any notation, e.g., floating point, scientific, arithmetical expression, etc.) with a unique placeholder token. For the same reason, each unit of measure is replaced with a corresponding token, e.g., $cm^2$ is replaced with centimeter area.
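This normalization can be sketched with regular expressions as below; the placeholder token `NUM` and the small unit table are hypothetical, and the production system covers far more units.

```python
import re

UNIT_TOKENS = {           # hypothetical mapping; extend with the units that occur in the data
    "cm2": "centimeter_area",
    "cm": "centimeter",
    "kg": "kilogram",
}

NUMBER_RE = re.compile(r"\d+(?:[.,]\d+)?(?:[eE][+-]?\d+)?")

def normalize_measures(sentence):
    """Replace every number with a placeholder token and map known units to unit tokens."""
    sentence = NUMBER_RE.sub(" NUM ", sentence)
    for unit, token in sorted(UNIT_TOKENS.items(), key=lambda x: -len(x[0])):
        sentence = re.sub(rf"\b{unit}\b", f" {token} ", sentence)
    return re.sub(r"\s+", " ", sentence).strip()
```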
Stop Words Dropping. Stop words appear to be of little value in the proposed matching algorithm. By removing the stop words we can focus on the important words instead. In our business scenario, we built a stop words list for foreign trade e-commerce.
Finally, we get the valid sentences $S_{valid}$.
Proposed method vs previous methods
The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement, and an initial trial showed that it performs poorly. Instead, we implemented an improved version of their method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.
TextRank: TextRank is a graph-based ranking model for text processing. BIBREF18 It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important attributes. This method consists of three steps. The first step is to merge all enquiries under one category as an article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing top keywords with category attributes.
Word2vec BIBREF19 : We use the word vector trained by BIBREF19 as the distributed representation of words. Then we get the enquiry sentence representation and category attribute representation. Finally we collect the statistics about the matched attributes of each category, and select the most frequent attributes under the same category.
GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words. And attribute selection procedure is the same as word2vec.
Proposed method: the detail of our proposed algorithm has been carefully explained in Section SECREF2 . There are several thresholds we need to pick in the experimentation setup. Based on trial and error analysis, we choose 0.75 as the sentence and attribute similarity threshold, which balances the precision and recall relatively well. In our application, due to product enquiry length limitation, customers usually don't refer to more than five attributes in their initial approach to the seller, we choose to keep 5 most important attributes for each category.
Evaluation is conducted by comparing the output of the systems with the manually annotated answers, and we calculate the precision and recall rates: $\mathrm {Precision} = \frac{|A_m \cap A_d|}{|A_d|}, \qquad \mathrm {Recall} = \frac{|A_m \cap A_d|}{|A_m|}$
where $A_m$ is the set of manually labeled attributes and $A_d$ is the set of detected important attributes.
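In code, this evaluation amounts to simple set operations per category; the F1-measure reported in Table 1 is the usual harmonic mean of the two.

```python
def precision_recall_f1(labeled, detected):
    """Set-based precision and recall over attributes, as defined above."""
    labeled, detected = set(labeled), set(detected)
    hits = len(labeled & detected)
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(labeled) if labeled else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```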
Table 1 depicts the algorithm performance for each category and the overall average metrics among all categories for our approach and the other methods. It can be observed that our proposed method achieves the best performance. The average F1-measure of our approach is 0.47, while the average F1-measure values of “GloVe”, “word2vec” and "TextRank" are 0.46, 0.42 and 0.20 respectively.
Result Analysis
In all our experiments, we find that the FastText method outperforms the other methods. By analyzing all results, we observe that semantic-similarity-based methods are more effective than the previous method which we implemented based on TextRank. This conclusion is understandable because many enquiries do not mention attribute words exactly; instead, semantically related words are often used.
Evaluating FastText, GloVe and word2vec, we show that compared to the other word representation learning algorithms, FastText performs best. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents a word by a sum of its character n-grams, and it is much more robust against problems like misspellings. In summary, FastText has a clear advantage in dealing with natural language corpora that often contain spelling mistakes.
We also applied the detected attributes in the automatic enquiry generation task and we obtained significantly better generated enquiries compared to previous rigid templates. Due to space limitation, we skip the explanation and leave it for future publications.
Conclusions and Future work
In this paper, we proposed a new general method of identifying important attributes for entities from a knowledge graph. This is a relatively new task, and our proposed method of using external textual data and performing semantic matching via word/sub-word embeddings obtained better results compared to other work using naive string matching and counting. In addition, we also successfully applied the detected important attributes in our real-world application of smart composing. In summary, the method is extensible to any knowledge graph without attribute importance information and outperforms previous methods.
In future work, there are two major areas with potential for improving the detection accuracy. The first one is sentence splitting. What we are trying to obtain is a semantically cohesive unit that can be used to match an attribute, and there might be more comprehensive methods than simply splitting by sentence-ending punctuation. The second one is improving the word embedding quality. We have implemented an in-house improved version of FastText, which is adapted to our data source. It is highly possible to use the improved word embeddings for the purpose of obtaining higher semantic matching precision. As for the application, we will try to use more statistical models in the natural language generation part of the smart composing framework that consumes the detected important attributes. | automated attribute-value extraction, score the attributes using the Bayes model, evaluate their importance with several different frequency metrics, aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model, OntoRank algorithm |
cda4612b4bda3538d19f4b43dde7bc30c1eda4e5 | cda4612b4bda3538d19f4b43dde7bc30c1eda4e5_1 | Q: What are the traditional methods to identifying important attributes?
Text: The problem we solve in this paper
Knowledge graphs (KG) have been proposed for several years, and their most prominent application is in web search; for example, Google search triggers a certain entity card when a user's query matches or mentions an entity based on some statistical model. The core potential of a knowledge graph lies in its capability of reasoning and inferring, and we have not seen a revolutionary breakthrough in such areas yet. One main obstacle is obviously the lack of sufficient knowledge graph data, including entities, entities' descriptions, entities' attributes, and relationships between entities. A fully functional knowledge graph supporting general-purpose reasoning and inference might still require long years of the community's innovation and hard work. On the other hand, many less demanding applications have great potential to benefit from the availability of information from the knowledge graph, such as query understanding and document understanding in information retrieval/search engines, simple inference in question answering systems, and easy reasoning in domain-limited decision support tools. Not only academia but also industry has been heavily investing in knowledge graphs, such as Google's knowledge graph, Amazon's product graph, Facebook's Graph API, IBM's Watson, and Microsoft's Satori.
In existing knowledge graphs, such as Wikidata and DBpedia, attributes usually do not have an order or priorities, and we do not know which attributes are more important and of more interest to users. Such an importance score of attributes is a vital piece of information in many applications of a knowledge graph. The most important application is the entity card triggered in a search engine when a customer's query hits an entity. An entity usually has a large number of attributes, but an entity card has limited space and can only show the most significant information; knowing attribute importance makes the display of an entity card easy to implement. Attribute importance also has great potential to play a significant role in search engines, namely in deciding the matching score between the query and attribute values. If the query matches a very important attribute, the relevance contribution from such a match should be higher than that of a match on an ignorable attribute. Another application is in e-commerce communications, where a buyer initiates a communication cycle with a seller by sending a product enquiry. Writing the enquiry on a mobile phone is inconvenient, and automatic composing assistance has great potential to improve customer experience by alleviating the writing burden. In the product enquiry, customers need to specify their requirements and ask questions about products, and their requirements and questions are usually about the most important attributes of the products. If we can identify the important attributes of products, we can help customers draft the enquiry automatically to reduce their input time.
Related Research
Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.
As for already existing knowledge graphs, there is plenty of work in the literature on ranking entities by relevance, with or without a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: documents, terms and RDF graphs. The algorithm is based on the rational surfer model and was successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapts the well-known PageRank/HITS algorithms to semantic web data and takes advantage of property values to rank entities. The authors of BIBREF7 , BIBREF8 also focused on ranking entities, sorting semantic web resources based on importance, relevance and query length, and aggregating these features in an overall ranking model.
Only a few works specifically address the problem of computing attribute rankings in a given knowledge graph. Ibminer BIBREF9 introduced a tool for infobox (an alias of an entity card) template suggestion, which collects attributes from different sources and sorts them by popularity based on their co-occurrences in the dataset. In BIBREF10 , intermediate features are computed from a structured knowledge base, including the importance or popularity of each entity type and global IDF scores for attributes and entity types, and these features are aggregated to train a classifier. A similar approach in BIBREF11 adds more features extracted from GoogleSuggestChars data. In BIBREF12 , Ali et al. introduced a new set of features that exploits semantic information about entities as well as information from top-ranked documents returned by a general search engine. To evaluate their approach, they collected a dataset from Wikipedia infoboxes, whose attribute ordering reflects the collaborative effort of a large community of users and might therefore not be accurate.
What we propose and what we have done
There has been broad research on entity detection, relationship extraction, and missing-relationship prediction. For example, BIBREF13 , BIBREF14 and BIBREF15 explain how to construct a knowledge graph and how to perform representation learning on it. Some research addresses attribute extraction, such as BIBREF16 and BIBREF4 ; the latter is notable in that it also computes attribute importance at the same time. As for modeling attribute importance in an existing knowledge graph whose attribute extraction is already complete, we found only a few studies, all of which use simple co-occurrence counts to rank entity attributes. In reality, many knowledge graphs contain no attribute importance information; for example, in the well-known Wikidata a large number of entities have many attributes, and it is difficult to know which of them are significant and deserve more attention. In this research we focus on identifying important attributes in existing knowledge graphs. Specifically, we propose a new method that uses an extra user-generated data source to evaluate attribute importance, and we use recently proposed state-of-the-art word/sub-word embedding techniques to match the external data against the attribute names and values of entities in the knowledge graph. We then use the statistics obtained from this matching to compare attribute importance. Our method generalizes to any knowledge graph without attribute importance information: whenever an external textual data source can be found, the proposed method works even if the external data does not exactly match the attribute text, since the vector embeddings perform semantic rather than exact string matching.
The remainder of the paper is organized as follows: Section SECREF2 explains our proposed method in detail, including what kind of external data is required, how to process it, how to perform the semantic matching, and how to rank the attributes by the resulting statistics. Section SECREF3 introduces our experiments, including the setup, the data, and the experimental results compared with baseline methods, and briefly describes our real-world application scenario in e-commerce communication. Section SECREF4 draws conclusions from our experiments and analysis and points out promising future research directions.
Our proposed Method
In this section we introduce our proposed method in detail. We use our application scenario to explain the logic behind the method, but the scope is not limited to our use case, and the method can be extended to any existing knowledge graph without attribute importance information.
Application Scenario
Alibaba.com is currently the world's largest cross-border business-to-business (B2B) e-commerce platform, supporting 17 languages for customers from all over the world. On the website, English is the dominant language and accounts for around 50% of the traffic. The platform has already accumulated a very large knowledge graph of products, where an entity is a product or a product category; every entity carries information such as its name, images and many attributes, with no ordering information among the attributes. The entities are also connected by a taxonomy structure, and similar products usually belong to the same category/sub-category.
Since B2B procurement usually involves a large amount of money, the business is a long process that begins with a product enquiry. Generally speaking, when customers are interested in a product, they start a communication cycle with the seller by sending a product enquiry. In the enquiry, customers specify their requirements and ask questions about the product, and these usually refer to its most important attributes. Fig. FIGREF5 shows an enquiry example. Alibaba.com has accumulated tens of millions of product enquiries, and we would like to leverage this information, in combination with the product knowledge graph we have, to figure out the most important attributes for each category of products.
In our application scenario, the product knowledge graph is the existing knowledge graph and the enquiry data is the external textual data source. From now on, we will use our application scenario to explain the details of our proposed algorithm.
We propose an unsupervised learning framework for extracting important product attributes from product enquiries. By calculating the semantic similarity between each enquiry sentence and each attribute of the product to which the enquiry corresponds, we identify the product attributes that the customer cares about most.
The attributes described in an enquiry may be expressed through attribute names, attribute values or other wordings; for example, either the word “color” or a color instance word such as “purple” may be mentioned. Therefore, when calculating the semantic similarity between enquiry sentences and product attributes, we need both attribute names and attribute values. Like any other knowledge graph, the product attributes in the knowledge graph we use contain noise and mistakes, so we need to clean and normalize the attribute data before consuming it. We introduce the details of our data cleaning process in Section SECREF14 .
FastText Introduction
FastText is a library created by Facebook Research for efficient learning of word representations and sentence classification. Here, we only use its word representation functionality.
FastText models morphology by considering subword units and representing a word by the sum of its character n-grams BIBREF17 . In the original model the authors use the binary logistic loss, and the negative-sampling loss for a single (word, context) instance is

$\log\big(1 + e^{-s(w_t, w_c)}\big) + \sum_{n \in \mathcal{N}_{t,c}} \log\big(1 + e^{s(w_t, n)}\big).$

Denoting the logistic loss function by $\ell : x \mapsto \log(1 + e^{-x})$, the loss over a sentence is

$\sum_{t=1}^{T} \Big[ \sum_{c \in \mathcal{C}_t} \ell\big(s(w_t, w_c)\big) + \sum_{n \in \mathcal{N}_{t,c}} \ell\big(-s(w_t, n)\big) \Big].$

The scoring function between a word $w$ and a context word $c$ is

$s(w, c) = \sum_{g \in \mathcal{G}_w} \mathbf{z}_g^{\top} \mathbf{v}_c.$

In the above, $\mathcal{N}_{t,c}$ is a set of negative examples sampled from the vocabulary, $\mathcal{C}_t$ is the set of indices of words surrounding word $w_t$, $\mathcal{G}_w \subset \{1, \dots, G\}$ is the set of n-grams appearing in word $w$, $G$ is the size of the dictionary we have for n-grams, and $\mathbf{z}_g$ is the vector representation of n-gram $g$.
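To make the notation concrete, the following is a small numerical sketch of the subword scoring function and the negative-sampling logistic loss; the toy vectors, dimensions and counts are made up for illustration and are not taken from the paper.

```python
import numpy as np

def logistic_loss(x):
    # l(x) = log(1 + exp(-x)), the binary logistic loss
    return float(np.log1p(np.exp(-x)))

def score(ngram_vectors, context_vector):
    # s(w, c): sum over the word's character n-gram vectors, each dotted with the context vector
    return float(sum(z_g @ context_vector for z_g in ngram_vectors))

rng = np.random.default_rng(0)
dim = 5
G_w = [rng.normal(size=dim) for _ in range(4)]      # n-gram vectors z_g of one word
v_pos = rng.normal(size=dim)                        # one observed context word
v_negs = [rng.normal(size=dim) for _ in range(2)]   # two sampled negative examples

# negative-sampling loss for this single (word, context) instance
loss = logistic_loss(score(G_w, v_pos)) + sum(
    logistic_loss(-score(G_w, v_n)) for v_n in v_negs
)
print(round(loss, 4))
```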
Compared with word2vec or glove, FastText has following advantages:
It covers rare and out-of-vocabulary (OOV) words. Since the basic modeling units in FastText are character n-grams, both rare and OOV words obtain usable representations from their constituent n-grams, whereas word2vec and GloVe fail to provide accurate vectors for such words. In our application the training data is written by end customers and contains many misspellings, which easily become OOV words; see the sketch after this list for an illustration.
Character n-gram embeddings tend to outperform word2vec and GloVe on smaller datasets.
FastText is more efficient and its training is relatively fast.
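As a quick illustration of the OOV point above, the sketch below trains a tiny FastText model with gensim (assuming gensim 4.x is available); the corpus and hyperparameters are invented for the example and do not reflect the production setup.

```python
from gensim.models import FastText

# tiny tokenized corpus of enquiry-like sentences (made up for the sketch)
sentences = [
    ["what", "is", "the", "color", "of", "this", "product"],
    ["please", "quote", "the", "price", "and", "minimum", "order", "quantity"],
    ["which", "material", "do", "you", "use", "for", "this", "item"],
] * 50  # repeat so the toy model has enough training examples

model = FastText(sentences=sentences, vector_size=32, window=3, min_count=1, epochs=10)

# "colr" never appears in the corpus, but it still receives a vector from its
# character n-grams and ends up close to the correctly spelled "color".
print(model.wv.similarity("color", "colr"))
```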
Matching
In this section, we explain in detail how to compute the matching between an enquiry sentence and a product attribute. The explanation is given for a single product category; other categories are handled in the same way.
As shown in Fig. FIGREF12 , each sentence is compared with each attribute of the product category that the product belongs to. We then obtain a score between a sentence INLINEFORM0 and an attribute INLINEFORM1 , INLINEFORM2 INLINEFORM3
where INLINEFORM0 is the set of all possible values of this attribute INLINEFORM1 and INLINEFORM2 is the word vector of INLINEFORM3 . According to this formula, for each sentence we keep the top two attributes whose scores are above the threshold INLINEFORM4 . We choose two attributes rather than one because a sentence may mention more than one attribute. In addition, some sentences are greetings or self-introductions and contain no attribute information about the product, so we also require the score to be higher than this threshold.
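A rough sketch of this matching step is shown below, assuming the score is the maximum cosine similarity between the sentence tokens and the attribute's name/value terms (the exact aggregation in the paper may differ), with the 0.75 threshold used later in the experiments and the top-2 selection described above; function and variable names are ours.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def sentence_attribute_score(sentence_tokens, attribute_terms, wv):
    """Highest word-to-word cosine similarity between the sentence tokens and the
    attribute's name/value terms; `wv` maps any token to its (FastText) vector."""
    scores = [cosine(wv[s], wv[a]) for s in sentence_tokens for a in attribute_terms]
    return max(scores) if scores else 0.0

def top_attributes(sentence_tokens, attributes, wv, threshold=0.75, k=2):
    # attributes: mapping from attribute name to its terms (name plus possible values)
    scored = [(name, sentence_attribute_score(sentence_tokens, terms, wv))
              for name, terms in attributes.items()]
    scored = [item for item in scored if item[1] >= threshold]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:k]
```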
Data introduction
For our knowledge graph data, entity (product) attributes can be roughly divided into transaction-order-specific ones and product-specific ones; in this paper we study the product-specific ones. We also note that we only consider the recommended communication language on the Alibaba.com platform, which is English.
To construct the evaluation dataset, the top 14 categories were first chosen based on their business promotion features, and 3 million typical products under each category were then selected to form the attribute candidates. After preprocessing and basic filtering, the top product-specific attributes from the 14 categories were handed to our annotators for manual labeling.
For each category, each annotator is asked to choose at most 10 important attributes from the buyer's perspective. After all annotators complete their annotations, the attributes are sorted by their summed votes. In the end, 111 important attributes from the 14 categories are kept for the final evaluation.
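A minimal sketch of this vote aggregation, with made-up annotations; in the real setup there are 14 categories and multiple annotators per category.

```python
from collections import Counter

# toy annotations: each inner list is one annotator's picks for the category
annotations = {
    "mobile phones": [
        ["color", "memory", "screen size"],
        ["memory", "battery capacity", "color"],
        ["color", "battery capacity", "price"],
    ],
}

important_attributes = {}
for category, per_annotator in annotations.items():
    votes = Counter(attr for picks in per_annotator for attr in picks)
    # sort attributes by summed votes and keep at most 10 per category
    important_attributes[category] = [a for a, _ in votes.most_common(10)]

print(important_attributes["mobile phones"])
```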
Beyond the evaluation reported in this paper, we have actually run the matching on more than 4,000 categories covering more than 100 million products and more than 20 million enquiries. Due to limited annotation resources, we can only sample a small number of categories (14 here) to evaluate the proposed algorithm.
Data preprocessing
The preprocessing of product enquiries and attributes is summarized in Algorithm 1 (Data Preprocess Algorithm): each raw enquiry is converted to plain text, invalid enquiries are filtered out, the remaining enquiries are split into sentences, and each sentence is normalized before being added to the set of valid sentences.
First, for every product enquiry, we convert the original HTML into plain text. Second, we filter out useless enquiries such as non-English enquiries and spam, using regular expressions for language detection and a spam detector respectively. Third, we obtain the sentence list INLINEFORM0 by splitting every enquiry into sentences as described in Section 2.2. Then, for every sentence INLINEFORM1 in INLINEFORM2 , we apply three extra steps: (a) spelling correction, (b) regularizing measures and numbers, and (c) stop-word dropping.
Spelling Correction. Since quite a lot of the product enquiries and self-filled attributes are misspelled, we replace exact-match lookup with a fuzzified search based on Levenshtein distance; the fuzzified search is used only when an exact match is not found. Some attributes are actually the same, such as "type" and "product type"; we merge such attributes by checking whether one attribute name is contained in the other.
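A rough sketch of the spelling-correction and attribute-merging steps; it uses Python's standard difflib as a stand-in for the Levenshtein-based fuzzified search, and the vocabulary and cutoff are illustrative.

```python
import difflib

vocabulary = ["color", "material", "voltage", "product type"]

def correct(token, vocab, cutoff=0.75):
    # try the exact match first; fall back to fuzzified search otherwise
    if token in vocab:
        return token
    close = difflib.get_close_matches(token, vocab, n=1, cutoff=cutoff)
    return close[0] if close else token

def merge_attributes(attribute_names):
    # merge attributes where one name contains the other, e.g. "type" vs "product type"
    merged = []
    for name in sorted(attribute_names, key=len):
        if not any(name in kept or kept in name for kept in merged):
            merged.append(name)
    return merged

print(correct("colro", vocabulary))                         # -> "color"
print(merge_attributes(["type", "product type", "color"]))  # -> ["type", "color"]
```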
Regular Measures and Numbers. Attributes of numeric type have values composed of numbers and units, such as INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , etc. We replace all numbers (in any notation, e.g., floating point, scientific, arithmetical expression) with a unique token ( INLINEFORM4 ). For the same reason, each unit of measure is replaced with a corresponding token, e.g., INLINEFORM5 is replaced with centimeter area.
Stop Words Dropping. Stop words are of little value in the proposed matching algorithm; removing them lets us focus on the informative words. For our business scenario, we built a stop-word list tailored to foreign-trade e-commerce.
Finally, we get the valid sentences INLINEFORM0 .
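Putting these steps together, a simplified preprocessing pipeline might look as follows (spelling correction is omitted here, see the earlier sketch); the stop-word list, unit table and regexes are placeholders rather than the production ones.

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "are", "please", "hello", "dear"}   # illustrative subset
UNIT_TOKENS = {"cm": "<centimeter>", "kg": "<kilogram>", "pcs": "<pieces>"}

def preprocess_enquiry(html_text, is_english=True, is_spam=False):
    # 1) drop invalid enquiries (non-English or spam), flags computed upstream
    if not is_english or is_spam:
        return []
    # 2) strip html tags to obtain plain text
    text = re.sub(r"<[^>]+>", " ", html_text)
    valid_sentences = []
    # 3) split into sentences on ending punctuation
    for sentence in re.split(r"[.!?]+", text):
        tokens = [t.strip(",;:") for t in sentence.lower().split()]
        # 4) replace numbers and units of measure with unique tokens
        tokens = ["<number>" if re.fullmatch(r"[\d.,]+", t) else UNIT_TOKENS.get(t, t)
                  for t in tokens]
        # 5) drop stop words and keep the remaining tokens
        tokens = [t for t in tokens if t and t not in STOP_WORDS]
        if tokens:
            valid_sentences.append(tokens)
    return valid_sentences

print(preprocess_enquiry("<p>Hello, what is the price for 100 pcs?</p>"))
```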
Proposed method vs previous methods
The existing co-occurrence methods do not suit our application scenario, since exact string matching is too strong a requirement, and an initial trial confirmed this. Instead, we implemented an improved version of this idea based on TextRank as our baseline. In addition, we tested multiple semantic matching algorithms for comparison with our chosen method.
TextRank: TextRank is a graph-based ranking model for text processing BIBREF18 and an unsupervised algorithm for keyword extraction. Since product attributes are usually keywords in enquiries, we can compare these keywords with the category attributes to find the most important ones. The method consists of three steps: first, merge all enquiries under one category into a single article; second, extract the top 50 keywords for each category; third, find the most important attributes by comparing the top keywords with the category attributes.
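A simplified TextRank-style keyword extractor is sketched below, assuming networkx is available; it builds a word co-occurrence graph and ranks nodes with PageRank, which approximates but does not replicate the exact BIBREF18 formulation.

```python
import networkx as nx

def textrank_keywords(tokenized_sentences, window=3, top_k=50):
    """Build a word co-occurrence graph over a sliding window and rank words with PageRank."""
    graph = nx.Graph()
    for tokens in tokenized_sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + window]:
                if w != v:
                    graph.add_edge(w, v)
    ranks = nx.pagerank(graph)
    return [w for w, _ in sorted(ranks.items(), key=lambda kv: kv[1], reverse=True)[:top_k]]

sentences = [["what", "color", "is", "this", "phone"],
             ["which", "color", "and", "material", "do", "you", "offer"]]
print(textrank_keywords(sentences, top_k=5))
```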
Word2vec BIBREF19 : We use the word vectors trained as in BIBREF19 as the distributed representation of words, from which we build the enquiry sentence representation and the category attribute representation. Finally, we collect statistics about the matched attributes of each category and select the most frequent attributes within that category.
GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words, and the attribute selection procedure is the same as for word2vec.
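For the word2vec baseline, a sketch of training embeddings with gensim and averaging word vectors into a sentence representation is given below; the corpus and hyperparameters are illustrative, and GloVe vectors would simply be swapped in as the word vectors for the corresponding baseline.

```python
import numpy as np
from gensim.models import Word2Vec

# toy tokenized enquiry corpus; in the comparison the same pipeline is run with
# word2vec, GloVe and FastText vectors, and only the embedding source changes
sentences = [["what", "is", "the", "price", "and", "color"],
             ["which", "material", "and", "voltage", "do", "you", "support"]] * 100

w2v = Word2Vec(sentences=sentences, vector_size=32, window=3, min_count=1, epochs=5)

def sentence_vector(tokens, model):
    # average the in-vocabulary word vectors as a simple sentence representation
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

print(sentence_vector(["price", "color"], w2v)[:5])
```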
Proposed method: the details of our proposed algorithm are explained in Section SECREF2 . Several thresholds must be picked in the experimental setup. Based on trial-and-error analysis, we choose 0.75 as the sentence-attribute similarity threshold, which balances precision and recall reasonably well. In our application, due to the length limit of product enquiries, customers usually do not refer to more than five attributes in their initial approach to the seller, so we keep the 5 most important attributes for each category.
Evaluation is conducted by comparing the output of each system with the manually annotated answers, and we calculate precision and recall as

$Precision = \frac{|A \cap D|}{|D|}, \qquad Recall = \frac{|A \cap D|}{|A|},$

where $A$ is the set of manually labeled attributes and $D$ is the set of detected important attributes.
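These metrics reduce to simple set operations; a small sketch with made-up labels:

```python
def precision_recall_f1(labeled, detected):
    labeled, detected = set(labeled), set(detected)
    hit = len(labeled & detected)
    precision = hit / len(detected) if detected else 0.0
    recall = hit / len(labeled) if labeled else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy example: 10 manually labeled attributes vs the 5 detected for one category
labeled = ["color", "material", "size", "voltage", "power", "battery",
           "capacity", "weight", "warranty", "certification"]
detected = ["color", "material", "price", "size", "brand"]
print(precision_recall_f1(labeled, detected))   # precision 0.6, recall 0.3, F1 ~ 0.4
```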
Table 1 shows the per-category performance and the overall averages for our approach and the other methods. Our proposed method achieves the best performance: its average F1-measure is 0.47, while the average F1-measures of GloVe, word2vec and TextRank are 0.46, 0.42 and 0.20 respectively.
Result Analysis
In all our experiments, the FastText-based method outperforms the others. Analyzing the results, we observe that semantic-similarity-based methods are more effective than the TextRank-based baseline. This is understandable, because many enquiries do not mention attribute words exactly but use semantically related words instead.
Comparing FastText, GloVe and word2vec, FastText performs best among the word representation learning algorithms. Sampling and analyzing the category attributes, we find that many self-filled attributes contain misspellings. Because FastText represents a word as the sum of its character n-grams, it is much more robust to such problems. In summary, FastText has a clear advantage when dealing with natural language corpora that commonly contain spelling mistakes.
We also applied the detected attributes to the automatic enquiry generation task and obtained significantly better generated enquiries than with the previous rigid templates. Due to space limitations, we skip the details and leave them for future publications.
Conclusions and Future work
In this paper, we proposed a new general method for identifying important attributes of entities in a knowledge graph. This is a relatively new task, and our method of using external textual data and performing semantic matching via word/sub-word embeddings obtains better results than previous work based on naive string matching and counting. In addition, we successfully applied the detected important attributes in our real-world smart-composing application. In summary, the method is extensible to any knowledge graph lacking attribute importance information and outperforms previous methods.
In future work, there are two major areas with potential to improve detection accuracy. The first is sentence splitting: what we actually want are semantically cohesive units that can be matched against an attribute, and there may be more sophisticated methods than simply splitting on sentence-ending punctuation. The second is improving the word embedding quality: we have implemented an in-house improved version of FastText adapted to our data source, and using the improved embeddings should yield higher semantic matching precision. On the application side, we will try to use more statistical models in the natural language generation part of the smart-composing framework that consumes the detected important attributes.
9fe6339c7027a1a0caffa613adabe8b5bb6a7d4a | 9fe6339c7027a1a0caffa613adabe8b5bb6a7d4a_0 | Q: What user generated text data do you use?
Text: The problem we solve in this paper
Knowledge graph(KG) has been proposed for several years and its most prominent application is in web search, for example, Google search triggers a certain entity card when a user's query matches or mentions an entity based on some statistical model. The core potential of a knowledge graph is about its capability of reasoning and inferring, and we have not seen revolutionary breakthrough in such areas yet. One main obstacle is obviously the lack of sufficient knowledge graph data, including entities, entities' descriptions, entities' attributes, and relationship between entities. A full functional knowledge graph supporting general purposed reasoning and inference might still require long years of the community's innovation and hardworking. On the other hand, many less demanding applications have great potential benefiting from the availability of information from the knowledge graph, such as query understanding and document understanding in information retrieval/search engines, simple inference in question answering systems, and easy reasoning in domain-limited decision support tools. Not only academy, but also industry companies have been heavily investing in knowledge graphs, such as Google's knowledge graph, Amazon's product graph, Facebook's Graph API, IBM's Watson, and Microsoft's Satori etc.
In the existing knowledge graph, such as Wikidata and DBpedia, usually attributes do not have order or priorities, and we don't know which attributes are more important and of more interest to users. Such importance score of attributes is a vital piece of information in many applications of knowledge graph. The most important application is the triggered entity card in search engine when a customer's query gets hit for an entity. An entity usually has a large amount of attributes, but an entity card has limited space and can only show the most significant information; attribute importance's presence can make the displaying of an entity card easy to implement. Attribute importance also has great potential of playing a significant role in search engine, how to decide the matching score between the query and attribute values. If the query matches a very important attribute, and the relevance contribution from such a match should be higher than matching an ignorable attribute. Another application is in e-commerce communications, and one buyer initiates a communication cycle with a seller by sending a product enquiry. Writing the enquiry on a mobile phone is inconvenient and automatic composing assistance has great potential of improving customer experience by alleviating the writing burden. In the product enquiry, customers need to specify their requirements and ask questions about products, and their requirements and questions are usually about the most important attributes of the products. If we can identify out important attributes of products, we can help customers to draft the enquiry automatically to reduce their input time.
Related Research
Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.
As for an already existing knowledge graph, there is plenty of work in literature dealing with ranking entities by relevance without or with a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapted the well-known PageRank/HITS algorithms to semantic web data, which took advantage of property values to rank entities. In BIBREF7 , BIBREF8 , authors also focused on ranking entities, sorting the semantic web resources based on importance, relevance and query length, and aggregating the features together with an overall ranking model.
Just a few works were designated to specifically address the problem of computing attribute rankings in a given Knowledge Graph. Ibminer BIBREF9 introduced a tool for infobox(alias of an entity card) template suggestion, which collected attributes from different sources and then sorted them by popularity based on their co-occurrences in the dataset. In BIBREF10 , using the structured knowledge base, intermediate features were computed, including the importance or popularity of each entity type, IDF computation for each attribute on a global basis, IDF computation for entity types etc., and then the features were aggregated to train a classifier. Also, a similar approach in BIBREF11 was designed with more features extracted from GoogleSuggestChars data. In BIBREF12 , Ali et al. introduced a new set of features that utilizes semantic information about entities as well as information from top-ranked documents from a general search engine. In order to experiment their approach, they collected a dataset by exploiting Wikipedia infoboxes, whose ordering of attributes reflect the collaborative effort of a large community of users, which might not be accurate.
What we propose and what we have done
There have been broad researches on entity detection, relationship extraction, and also missing relationship prediction. For example: BIBREF13 , BIBREF14 and BIBREF15 explained how to construct a knowledge graph and how to perform representation learning on knowledge graphs. Some research has been performed on attribute extraction, such as BIBREF16 and BIBREF4 ; the latter one is quite special that it also simultaneously computes the attribute importance. As for modeling attribute importance for an existing knowledge graph which has completed attribute extractions, we found only a few existing research, all of which used simple co-occurrences to rank entity attributes. In reality, many knowledge graphs do not contain attribute importance information, for example, in the most famous Wikidata, a large amount of entities have many attributes, and it is difficult to know which attributes are significant and deserve more attention. In this research we focus on identifying important attributes in existing knowledge graphs. Specifically, we propose a new method of using extra user generated data source for evaluating the attribute importance, and we use the recently proposed state-of-the-art word/sub-word embedding techniques to match the external data with the attribute definition and values from entities in knowledge graphs. And then we use the statistics obtained from the matching to compare the attribute importance. Our method has general extensibility to any knowledge graph without attribute importance. When there is a possibility of finding external textual data source, our proposed method will work, even if the external data does not exactly match the attribute textual data, since the vector embedding performs semantic matching and does not require exact string matching.
The remaining of the paper is organized as follows: Section SECREF2 explains our proposed method in detail, including what kind of external data is required, and how to process the external data, and also how to perform the semantic matching and how to rank the attributes by statistics. Section SECREF3 introduces our experimentations, including our experimentation setup, data introduction and experimental result compared to other methods we do not employ. Section SECREF3 also briefly introduces our real world application scenario in e-commerce communication. Section SECREF4 draws the conclusion from our experimentations and analysis, and also we point out promising future research directions.
Our proposed Method
In this section, we will introduce our proposed method in detail. We use our application scenario to explain the logic behind the method, but the scope is not limited to our use case, and it is possible to extend to any existing knowledge graph without attribute importance information.
Application Scenario
Alibaba.com is currently the world's largest cross-border business to business(B2B) E-commerce platform and it supports 17 languages for customers from all over the world. On the website, English is the dorminant language and accounts for around 50% of the traffic. The website has already accumulated a very large knowledge graph of products, and the entity here is the product or the product category; and every entity has lots of information such as the entity name, images and many attributes without ordering information. The entities are also connected by taxonomy structure and similar products usually belong to the same category/sub-category.
Since the B2B procurement usually involves a large amount of money, the business will be a long process beginning with a product enquiry. Generally speaking, when customers are interested in some product, they will start a communication cycle with a seller by sending a product enquiry to the seller. In the product enquiry, customers will specify their requirements and ask questions about the product. Their requirements and questions usually refer to the most important attributes of the product. Fig. FIGREF5 shows an enquery example. Alibaba.com has accumulated tens of millions of product enquires, and we would like to leverage these information, in combination of the product knowledge graph we have, to figure out the most important attributes for each category of products.
In our application scenario, the product knowledge graph is the existing knowledge graph and the enquiry data is the external textual data source. From now on, we will use our application scenario to explain the details of our proposed algorithm.
We propose an unsupervised learning framework for extracting important product attributes from product enquiries. By calculating the semantic similarity between each enquiry sentence and each attribute of the product to which the enquiry corresponds to, we identify the product attributes that the customer cares about most.
The attributes described in the enquiry may contain attribute names or attribute values or other expressions, for example, either the word “color” or a color instance word “purple” is mentioned. Therefore, when calculating the semantic similarity between enquiry sentences and product attributes, we need both attribute names and attribute values. The same as any other knowledge graph, the product attributes in our knowledge graph we use contain noises and mistakes. We need to clean and normalize the attribute data before consuming it. We will introduce the detail of our data cleaning process in Section SECREF14 .
FastText Introduction
FastText is a library created by the Facebook Research for efficient learning of word representations and sentence classification. Here, we just use the word representation functionality of it.
FastText models morphology by considering subword units, and representing words by a sum of its character n-grams BIBREF17 . In the original model the authors choose to use the binary logistic loss and the loss for a single instance is written as below: INLINEFORM0
By denoting the logistic loss function INLINEFORM0 , the loss over a sentence is: INLINEFORM1
The scoring function between a word INLINEFORM0 and a context word INLINEFORM1 is: INLINEFORM2
In the above functions, INLINEFORM0 is a set of negative examples sampled from the vocabulary, INLINEFORM1 is the set of indices of words surrounding word INLINEFORM2 , INLINEFORM3 is the set of n-grams appearing in word INLINEFORM4 , INLINEFORM5 is the size of the dictionary we have for n-grams, INLINEFORM6 is a vector representation to each n-gram INLINEFORM7 .
Compared with word2vec or glove, FastText has following advantages:
It is able to cover rare words and out-of-vocabulary(OOV) words. Since the basic modeling units in FastText are ngrams, and both rare words and OOV ones can obtain efficient word representations from their composing ngrams. Word2vec and glove both fail to provide accurate vector representations for these words. In our application, the training data is written by end customers, and there are many misspellings which easily become OOV words.
Character n-grams embeddings tend to perform superior to word2vec and glove on smaller datasets.
FastText is more efficient and its training is relatively fast.
Matching
In this section, how to compute the matching between an enquiry sentence and a product attribute is explained in detail. Our explanation here is for a certain product category, and other categories are the same.
As you can see in Fig. FIGREF12 , each sentence is compared with each attribute of a product category that the product belongs to. We now get a score between a sentence INLINEFORM0 and an attribute INLINEFORM1 , INLINEFORM2 INLINEFORM3
where INLINEFORM0 is all the possible values for this INLINEFORM1 , INLINEFORM2 is the word vector for INLINEFORM3 . According to this formula, we can get top two attributes whose scores are above the threshold INLINEFORM4 for each sentence. We choose two attributes instead of one because there may be more than one attribute for each sentence. In addition, some sentences are greetings or self-introduction and do not contain the attribute information of the product, so we require that the score to be higher than a certain threshold.
Data introduction
For our knowledge graph data, entity (product) attributes can be roughly divided into transaction-order-specific ones and product-specific ones; in this paper, we choose the product-specific ones for further study. We also need to point out that we only focus on the recommended communication language on the Alibaba.com platform, which is English.
To construct the evaluation dataset, the top 14 categories are first chosen based on their business promotion features, and 3 million typical products under each category are then chosen to form the attribute candidates. After preprocessing and basic filtering, the top product-specific attributes from the 14 different categories are chosen to be manually labeled by our annotators.
For each category, annotators are each asked to choose at most 10 important attributes from the buyer's perspective. After all annotators complete their annotations, attributes are sorted according to the summed votes. In the end, 111 important attributes from the 14 categories are kept for final evaluation.
Outside of the evaluation explained in this paper, we have actually performed the matching on more than 4,000 categories covering more than 100 million products and more than 20 million enquiries. Due to limited annotation resources, we can only sample a small number of categories (14 here) to evaluate the proposed algorithm.
Data preprocessing
The preprocessing of the product enquiry and attribute data is shown in Algorithm 1 (Data Preprocess Algorithm): invalid enquiries are filtered out, each remaining enquiry is split into sentences, each sentence is processed as described below, and the set of valid sentences is returned.
Firstly, for every product enquiry, we convert the original HTML text into plain text. Secondly, we filter out useless enquiries such as non-English enquiries and spam; regular expressions and spam detection are used to detect non-English enquiries and spam, respectively. Thirdly, we obtain the sentence list INLINEFORM0 by splitting every enquiry into sentences as described in Section 2.2. Then, for every sentence INLINEFORM1 in INLINEFORM2 , we perform three extra processing steps: (a) spelling correction, (b) regularizing measures and numbers, and (c) stop word dropping.
Spelling Correction. Since quite a lot of the product enquiries and self-filled attributes are misspelled, we replace the exact words by a fuzzified search using the Levenshtein distance; the fuzzified search is used only if an exact match is not found. Some attributes are actually the same, such as "type" and "product type"; we merge such attributes by checking whether one attribute name is contained in the other.
Regular Measures and Numbers. Attributes of number type have values composed of numbers and units, such as INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , etc. We replace all numbers (in any notation, e.g., floating point, scientific, arithmetical expression, etc.) with a unique token ( INLINEFORM4 ). For the same reason, each unit of measure is replaced with a corresponding token, e.g., INLINEFORM5 is replaced with centimeter area.
Stop Words Dropping. Stop words appear to be of little value in the proposed matching algorithm. By removing the stop words we can focus on the important words instead. In our business scenario, we built a stop words list for foreign trade e-commerce.
Finally, we get the valid sentences INLINEFORM0 .
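A minimal sketch of this preprocessing pipeline is given below. The language filter, spam detector, spelling corrector, stop-word list, and unit-token mapping are stubbed or abbreviated placeholders for the production components described above.

```python
# Sketch of the enquiry preprocessing pipeline (filters and corrector are stubs).
import re

STOP_WORDS = {"the", "a", "an", "is", "are", "please", "hello"}   # abbreviated list
UNIT_TOKENS = {"cm": "centimeter", "kg": "kilogram", "pcs": "pieces"}

def looks_english(text):        # placeholder for the real regex-based language filter
    return bool(re.search(r"[A-Za-z]", text))

def is_spam(text):              # placeholder for the real spam detector
    return False

def correct_spelling(token):    # placeholder for Levenshtein-based fuzzified search
    return token

def normalize(sentence):
    tokens = []
    for tok in sentence.lower().split():
        tok = tok.strip(".,;:!?")
        tok = re.sub(r"\d+(\.\d+)?", "<NUM>", tok)   # numbers -> a unique token
        tok = UNIT_TOKENS.get(tok, tok)              # units -> corresponding tokens
        tok = correct_spelling(tok)
        if tok and tok not in STOP_WORDS:
            tokens.append(tok)
    return tokens

def preprocess(enquiries):
    valid_sentences = []
    for enquiry in enquiries:
        text = re.sub(r"<[^>]+>", " ", enquiry)      # html -> plain text
        if not looks_english(text) or is_spam(text):
            continue
        for sentence in re.split(r"[.!?]+", text):   # split enquiry into sentences
            tokens = normalize(sentence)
            if tokens:
                valid_sentences.append(tokens)
    return valid_sentences

print(preprocess(["<p>Hello, what is the color? Size is 30 cm.</p>"]))
```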
Proposed method vs previous methods
The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement and an initial trial showed its inadequacy. Instead, we implemented an improved version of such a method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.
TextRank: TextRank is a graph-based ranking model for text processing BIBREF18 . It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important ones. This method consists of three steps. The first step is to merge all enquiries under one category into a single article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing the top keywords with the category attributes.
Word2vec BIBREF19 : We use the word vectors trained by BIBREF19 as the distributed representations of words. We then obtain the enquiry sentence representation and the category attribute representation. Finally, we collect statistics about the matched attributes of each category and select the most frequent attributes under the same category.
GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representations of words. The attribute selection procedure is the same as for word2vec.
Proposed method: the details of our proposed algorithm are explained in Section SECREF2 . There are several thresholds we need to pick in the experimental setup. Based on trial-and-error analysis, we choose 0.75 as the sentence-attribute similarity threshold, which balances precision and recall relatively well. In our application, due to the product enquiry length limitation, customers usually do not refer to more than five attributes in their initial approach to the seller, so we keep the 5 most important attributes for each category.
Evaluation is conducted by comparing the output of the systems with the manually annotated answers, and we calculate the precision and recall rates. INLINEFORM0 INLINEFORM1
where INLINEFORM0 are the manually labeled attributes and INLINEFORM1 are the detected important attributes.
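A small sketch of this set-based evaluation, assuming both the detected and the labeled attributes are given as sets of attribute names:

```python
# Set-based precision, recall, and F1 between detected and labeled attributes.
def precision_recall(detected, labeled):
    detected, labeled = set(detected), set(labeled)
    hits = len(detected & labeled)
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(labeled) if labeled else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall({"color", "size", "material"}, {"color", "material", "style"}))
```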
Table 1 depicts the algorithm performance of each category and the overall average metrics among all categories for our approach and the other methods. It can be observed that our proposed method achieves the best performance. The average F1-measure of our approach is 0.47, while the average F1-measures of GloVe, word2vec and TextRank are 0.46, 0.42 and 0.20, respectively.
Result Analysis
In all our experiments, we find that the FastText method outperforms the other methods. By analyzing all results, we observe that semantic similarity based methods are more effective than the previous method we implemented based on TextRank. This is understandable because many enquiries do not mention attribute words exactly but instead use semantically related words.
Evaluating FastText, GloVe and word2vec, we show that FastText performs best among the word representation learning algorithms. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents words by the sum of their character n-grams and is much more robust against problems like misspellings. In summary, FastText has a clear advantage in dealing with natural language corpora that typically contain spelling mistakes.
We also applied the detected attributes in the automatic enquiry generation task and obtained significantly better generated enquiries compared to the previous rigid templates. Due to space limitations, we skip the explanation and leave it for future publications.
Conclusions and Future work
In this paper, we proposed a new general method for identifying important attributes for entities in a knowledge graph. This is a relatively new task, and our proposed method of using external textual data and performing semantic matching via word/sub-word embeddings obtained better results than previous work based on naive string matching and counting. In addition, we successfully applied the detected important attributes in our real-world smart composing application. In summary, the method is extensible to any knowledge graph without attribute importance information and outperforms the previous method.
In future work, there are two major areas with the potential to improve detection accuracy. The first is sentence splitting: what we are trying to obtain is a semantically cohesive unit that can be matched against an attribute, and there might be more comprehensive methods than simply splitting by sentence-ending punctuation. The second is improving the word embedding quality: we have implemented an in-house improved version of FastText adapted to our data source, and we plan to use the improved word embeddings to obtain higher semantic matching precision. As for the application, we will try to use more statistical models in the natural language generation part of the smart composing framework that consumes the detected important attributes. | Unanswerable
b5c3787ab3784214fc35f230ac4926fe184d86ba | b5c3787ab3784214fc35f230ac4926fe184d86ba_0 | Q: Did they propose other metrics?
Text: Introduction
Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.
In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.
However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike numeric or categorical data, simple descriptive statistics alone, such as word counts and vocabulary size, can hardly capture the syntactic and semantic properties of a text collection.
In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.
To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics.
Related Work
A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is n-gram, with dimension equals to the size of vocabulary. More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.
More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.
Another stream of related works is the evaluation metrics for cluster analysis. As measuring the property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 is often used. There are unsupervised metrics to measure the quality of a clustering result such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density, and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra-cluster rather than inter-cluster relationships.
Proposed Characteristic Metrics
We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Our first assumption is that, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained, intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. Each example should also reflect a sufficient signature of the class to which it belongs; that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should be semantically similar and coherent among each other within a cluster, where homogeneity comes into play.
The more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity is increasing with respect to the number of classes since text clusters should be as distinct and separate as possible from one another. If there is significant ambiguity between classes, the behavior of the proposed metrics and a possible new metric as an inter-class confusability measurement remain for future work.
In practice, the input is a collection of texts $\lbrace x_1, x_2, ..., x_m\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\mathcal {E}$ then transforms $x_i$ into a vector $\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example,
Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.
We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method.
Proposed Characteristic Metrics ::: Diversity
Embedding vectors of a given group of texts $\lbrace e_1, ..., e_m\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spread via a generalized sense of the radius.
Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\mathbb {R}^{H}$. Such isocontours can be described as:
where $x$ are all possible points in $\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\mu $ is a given mean vector with $\mu _j$ being the value along $j$-th axis, and $\sigma ^2_j$ is the variance of the $j$-th axis.
We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\sqrt{\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:
where $\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.
In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT.
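A minimal sketch of this computation, assuming the embeddings are given as an (m, H) matrix:

```python
# Diversity: geometric mean of per-dimension standard deviations.
import numpy as np

def diversity(embeddings):
    """embeddings: (m, H) array of sentence embeddings."""
    stds = embeddings.std(axis=0)
    # Geometric mean computed in log space for numerical stability.
    return float(np.exp(np.mean(np.log(stds + 1e-12))))

emb = np.random.default_rng(0).normal(size=(1000, 768))
print(diversity(emb))   # close to 1.0 for a unit-variance Gaussian blob
```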
Proposed Characteristic Metrics ::: Density
Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.
Following the assumption mentioned above, a straight-forward definition of the volume can be written as:
up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.
To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:
Given a set of embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we define the density metric as:
In practice, the computed density metric values often follow a heavy-tailed distribution, thus sometimes its $\log $ value is reported and denoted as density (log-scale).
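The sketch below follows one plausible reading of the dimension normalization above: the geometric mean of all per-axis standard deviations is treated as the effective radius and raised to the power $\sqrt{H}$ to obtain the volume. The exact normalization may differ from the paper's formulation, so treat this as an assumption.

```python
# Density: number of samples per dimension-normalized unit of volume (assumed form).
import numpy as np

def density(embeddings, log_scale=True):
    m, h = embeddings.shape
    stds = embeddings.std(axis=0)
    effective_radius = np.exp(np.mean(np.log(stds + 1e-12)))   # geometric mean of radii
    log_volume = np.sqrt(h) * np.log(effective_radius)          # volume ~ r_eff ** sqrt(H)
    log_density = np.log(m) - log_volume
    return float(log_density if log_scale else np.exp(log_density))

emb = np.random.default_rng(0).normal(size=(1000, 768))
print(density(emb))   # reported on the log scale
```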
Proposed Characteristic Metrics ::: Homogeneity
The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:
To construct a fully-connected network from the embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\log (n\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being
and the conditional probability of transition from $i$ to $j$ can be written as
All the transition probabilities $p(i \rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as
where $\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes
Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:
The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed.
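The sketch below implements this construction under explicit assumptions: pairwise Euclidean distances are tempered by the power $1/\log(H)$ and turned into edge weights that decay with distance, since the exact weighting formula is not reproduced here, and the stationary distribution is obtained by power iteration.

```python
# Homogeneity: normalized entropy rate of a distance-based Markov chain (assumed weights).
import numpy as np

def homogeneity(embeddings):
    m, h = embeddings.shape
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) ** (1.0 / np.log(h))   # tempered pairwise distances
    weights = np.exp(-dist)                                      # assumed decay with distance
    np.fill_diagonal(weights, 0.0)                               # no self-transitions
    trans = weights / weights.sum(axis=1, keepdims=True)         # row-stochastic transition matrix

    # Stationary distribution via power iteration.
    nu = np.full(m, 1.0 / m)
    for _ in range(1000):
        nu_next = nu @ trans
        if np.allclose(nu_next, nu, atol=1e-12):
            break
        nu = nu_next

    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(trans > 0, np.log(trans), 0.0)
    entropy = -np.sum(nu[:, None] * trans * logp)                # entropy rate of the chain
    return float(entropy / np.log(m - 1))                        # normalize into [0, 1]

emb = np.random.default_rng(0).normal(size=(200, 50))
print(homogeneity(emb))   # close to 1.0 for a uniform-looking Gaussian blob
```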
Simulations
To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method-BERT, in the following Experiments section.
Simulations ::: Simulation Setup
The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points with a standard deviation of $1.0$ along each axis and is centered around the origin. All Gaussian blobs are created using the make_blobs function in the scikit-learn package.
Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:
Down-sampling: Down-sample the base cluster to be $\lbrace 90\%, 80\%, ..., 10\%\rbrace $ of its original size. That is, create Gaussian blobs with $\lbrace 9000, ..., 1000\rbrace $ data points;
Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\lbrace 2.0, 3.0, ..., 10.0\rbrace $;
Outliers: Add $\lbrace 50, 100, ..., 500\rbrace $ outlier data points, i.e., $\lbrace 0.5\%, ..., 5\%\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;
Multiple Sub-clusters: Along the 1st axis, with $10,000$ data points in total, create $\lbrace 1, 2, ..., 10\rbrace $ clusters with equal sample sizes but at increasing distances.
For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE.
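A sketch of the base setup and the first two scenarios using scikit-learn's make_blobs; the metric functions are assumed to be implementations such as the sketches given earlier.

```python
# Base Gaussian blob plus the down-sampling and varying-spread scenarios.
import numpy as np
from sklearn.datasets import make_blobs

def base_blob(n_samples=10_000, dim=2, std=1.0, seed=0):
    X, _ = make_blobs(n_samples=n_samples, n_features=dim,
                      centers=[[0.0] * dim], cluster_std=std, random_state=seed)
    return X

rng = np.random.default_rng(0)
base = base_blob()

# Down-sampling scenario: 90%, 80%, ..., 10% of the base cluster.
for frac in np.arange(0.9, 0.0, -0.1):
    idx = rng.choice(len(base), size=int(frac * len(base)), replace=False)
    sample = base[idx]
    # compute diversity(sample), density(sample), homogeneity(sample) here

# Varying-spread scenario: standard deviations 2.0, 3.0, ..., 10.0.
for std in range(2, 11):
    blob = base_blob(std=float(std))
    # compute the three metrics on `blob` here
```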
Simulations ::: Simulation Results
Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.
The diversity metric is robust as its values remain almost the same to the down-sampling of an input cluster. This implies the diversity metric has a desirable property that it is insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. It is another intuitive property for a diversity metric that it grows linearly with increasing dispersion or variance of input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions but is less sensitive in high-dimensional spaces.
The density metric exhibits a linear relationship to the size of inputs when down-sampling, which is desired. When increasing spreads, the trend of the density metric corresponds well with human intuition. Note that the density metric decreases at a much faster rate in the higher-dimensional space as log-scale is used in the figure. The density metric also drops when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of data and thus increase our notion of volume as well. In the multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.
As random down-sampling or increasing the variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric to remain approximately the same. The proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first see huge drops of the homogeneity metric, but the values go up again slowly when more outliers are added. This corresponds well with our intuition that a small number of outliers breaks the uniformity, but more outliers should mean an increase of uniformity because the distribution of the added outliers themselves has a high uniformity.
For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.
To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution.
Experiments
The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU).
Experiments ::: Chosen Embedding Method
BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.
When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with the masked language model (MLM) objective, which randomly masks out tokens that the model must predict. The other pretraining task is next sentence prediction (NSP), which is to predict whether two sequences follow each other in the original text or not.
In this work, we use the pretrained $\text{BERT}_{\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:
where $e_i \in \mathbb {R}^{H}$. A text collection $\lbrace x_1, ..., x_m\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\lbrace e_1, ..., e_m\rbrace $.
We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset.
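The sketch below shows how sentences can be turned into mean-pooled BERT embeddings. The paper uses the GluonNLP pretrained BERT-base and averages metrics over layers L1, L6, and L12; this sketch substitutes the HuggingFace transformers API and a single layer purely for illustration.

```python
# Mean-pooled BERT sentence embeddings (HuggingFace API used as a stand-in).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (batch, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # mean pooling per sentence

embeddings = embed(["i want to book a flight to seattle",
                    "play some jazz music"])
print(embeddings.shape)   # torch.Size([2, 768])
```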
Experiments ::: Experimental Setup
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains spoken utterances (text) classified into one of 7 intents.
In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.
We down-sampled the SST-2 and Snips training sets from $100\%$ to $10\%$ in intervals of $10\%$. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used the entire test sets for all model evaluations.
To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets.
Experiments ::: Experimental Results
We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:
Experiments ::: Experimental Results ::: SST-2
In Table TABREF29, the sentiment classification accuracy is $92.66\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.
Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance.
Experiments ::: Experimental Results ::: Snips
In Table TABREF30, the Snips dataset seems to have distinct IC/SL classes, since the IC accuracy and SL F1 are as high as $98.71\%$ and $96.06\%$ without down-sampling, respectively. Similar to SST-2, this implies that the Snips training data should also support the inter-class differentiability assumption for our proposed characteristic metrics.
IC accuracy on Snips remains higher than $98\%$ until we down-sample the training set to $20\%$ of the original size. In contrast, SL F1 score is more sensitive to the down-sampling of the training set, as it starts decreasing when down-sampling. When the training set is only $10\%$ left, SL F1 score drops to $87.20\%$.
The diversity metric does not decrease until the training set is reduced to $40\%$ of the original set or less. This implies that random sampling does not impact the diversity if the sampling rate is greater than $40\%$; the training set very likely contains redundant information in terms of text diversity. This is supported by our observation that the model has consistently high IC/SL performance between the $40\%$-$100\%$ down-sampling ratios.
Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\%$ to $10\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets.
Analysis
We calculate and show in Table TABREF35 the Pearson's correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both are computed on the train set. Diversity is only correlated with Snips SL F1 score at a moderate level.
These are consistent with our simulation results, which show that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and, marginally, the homogeneity due to the decreasing number of data points in the embedding space. However, the simultaneous huge drops of model performance, density, and homogeneity imply that there is only limited redundancy and more informative data points are being thrown away when down-sampling. Moreover, the results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity.
Conclusions
In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work. | Yes |
9174aded45bc36915f2e2adb6f352f3c7d9ada8b | 9174aded45bc36915f2e2adb6f352f3c7d9ada8b_0 | Q: Which real-world datasets did they use?
Text: Introduction
Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.
In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.
However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike numeric or categorical data, simple descriptive statistics alone, such as word counts and vocabulary size, can hardly capture the syntactic and semantic properties of a text collection.
In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.
To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics.
Related Work
A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is n-gram, with dimension equals to the size of vocabulary. More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.
More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.
Another stream of related works is the evaluation metrics for cluster analysis. As measuring the property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 is often used. There are unsupervised metrics to measure the quality of a clustering result such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density, and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra-cluster rather than inter-cluster relationships.
Proposed Characteristic Metrics
We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Our first assumption is that, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained, intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. Each example should also reflect a sufficient signature of the class to which it belongs; that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should be semantically similar and coherent among each other within a cluster, where homogeneity comes into play.
The more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity is increasing with respect to the number of classes since text clusters should be as distinct and separate as possible from one another. If there is significant ambiguity between classes, the behavior of the proposed metrics and a possible new metric as an inter-class confusability measurement remain for future work.
In practice, the input is a collection of texts $\lbrace x_1, x_2, ..., x_m\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\mathcal {E}$ then transforms $x_i$ into a vector $\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example,
Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.
We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method.
Proposed Characteristic Metrics ::: Diversity
Embedding vectors of a given group of texts $\lbrace e_1, ..., e_m\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spread via a generalized sense of the radius.
Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\mathbb {R}^{H}$. Such isocontours can be described as:
where $x$ are all possible points in $\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\mu $ is a given mean vector with $\mu _j$ being the value along $j$-th axis, and $\sigma ^2_j$ is the variance of the $j$-th axis.
We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\sqrt{\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:
where $\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.
In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT.
Proposed Characteristic Metrics ::: Density
Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.
Following the assumption mentioned above, a straight-forward definition of the volume can be written as:
up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.
To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:
Given a set of embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we define the density metric as:
In practice, the computed density metric values often follow a heavy-tailed distribution, thus sometimes its $\log $ value is reported and denoted as density (log-scale).
Proposed Characteristic Metrics ::: Homogeneity
The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:
To construct a fully-connected network from the embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\log (n\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being
and the conditional probability of transition from $i$ to $j$ can be written as
All the transition probabilities $p(i \rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as
where $\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes
Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:
The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed.
Simulations
To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method-BERT, in the following Experiments section.
Simulations ::: Simulation Setup
The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points with a standard deviation of $1.0$ along each axis and is centered around the origin. All Gaussian blobs are created using the make_blobs function in the scikit-learn package.
Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:
Down-sampling: Down-sample the base cluster to be $\lbrace 90\%, 80\%, ..., 10\%\rbrace $ of its original size. That is, create Gaussian blobs with $\lbrace 9000, ..., 1000\rbrace $ data points;
Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\lbrace 2.0, 3.0, ..., 10.0\rbrace $;
Outliers: Add $\lbrace 50, 100, ..., 500\rbrace $ outlier data points, i.e., $\lbrace 0.5\%, ..., 5\%\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;
Multiple Sub-clusters: Along the 1st axis, with $10,000$ data points in total, create $\lbrace 1, 2, ..., 10\rbrace $ clusters with equal sample sizes but at increasing distances.
For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE.
Simulations ::: Simulation Results
Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.
The diversity metric is robust as its values remain almost the same to the down-sampling of an input cluster. This implies the diversity metric has a desirable property that it is insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. It is another intuitive property for a diversity metric that it grows linearly with increasing dispersion or variance of input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions but is less sensitive in high-dimensional spaces.
The density metric exhibits a linear relationship to the size of inputs when down-sampling, which is desired. When increasing spreads, the trend of the density metric corresponds well with human intuition. Note that the density metric decreases at a much faster rate in the higher-dimensional space as log-scale is used in the figure. The density metric also drops when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of data and thus increase our notion of volume as well. In the multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.
As random down-sampling or increasing the variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric to remain approximately the same. The proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first see huge drops of the homogeneity metric, but the values go up again slowly when more outliers are added. This corresponds well with our intuition that a small number of outliers breaks the uniformity, but more outliers should mean an increase of uniformity because the distribution of the added outliers themselves has a high uniformity.
For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.
To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution.
Experiments
The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU).
Experiments ::: Chosen Embedding Method
BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.
When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with the masked language model (MLM) objective, which randomly masks out tokens that the model must predict. The other pretraining task is next sentence prediction (NSP), which is to predict whether two sequences follow each other in the original text or not.
In this work, we use the pretrained $\text{BERT}_{\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:
where $e_i \in \mathbb {R}^{H}$. A text collection $\lbrace x_1, ..., x_m\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\lbrace e_1, ..., e_m\rbrace $.
We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset.
Experiments ::: Experimental Setup
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains spoken utterances (text) classified into one of 7 intents.
In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.
We down-sampled the SST-2 and Snips training sets from $100\%$ to $10\%$ in intervals of $10\%$. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used the entire test sets for all model evaluations.
To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets.
Experiments ::: Experimental Results
We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:
Experiments ::: Experimental Results ::: SST-2
In Table TABREF29, the sentiment classification accuracy is $92.66\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.
Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance.
Experiments ::: Experimental Results ::: Snips
In Table TABREF30, the Snips dataset seems to have distinct IC/SL classes, since the IC accuracy and SL F1 are as high as $98.71\%$ and $96.06\%$ without down-sampling, respectively. Similar to SST-2, this implies that the Snips training data should also support the inter-class differentiability assumption for our proposed characteristic metrics.
IC accuracy on Snips remains higher than $98\%$ until we down-sample the training set to $20\%$ of the original size. In contrast, SL F1 score is more sensitive to the down-sampling of the training set, as it starts decreasing when down-sampling. When the training set is only $10\%$ left, SL F1 score drops to $87.20\%$.
The diversity metric does not decrease until the training set is reduced to $40\%$ of the original set or less. This implies that random sampling does not impact the diversity if the sampling rate is greater than $40\%$; the training set very likely contains redundant information in terms of text diversity. This is supported by our observation that the model has consistently high IC/SL performance between the $40\%$-$100\%$ down-sampling ratios.
Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\%$ to $10\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets.
Analysis
We calculate and show in Table TABREF35 the Pearson's correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both are computed on the train set. Diversity is only correlated with Snips SL F1 score at a moderate level.
These findings are consistent with our simulation results, which show that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and, marginally, the homogeneity due to the decreasing number of data points in the embedding space. However, the simultaneous large drops in model performance, density, and homogeneity imply that there is only limited redundancy and that more informative data points are being thrown away when down-sampling. Moreover, the results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity.
Conclusions
In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work. | SST-2 (Stanford Sentiment Treebank, version 2), Snips |
9174aded45bc36915f2e2adb6f352f3c7d9ada8b | 9174aded45bc36915f2e2adb6f352f3c7d9ada8b_1 | Q: Which real-world datasets did they use?
Text: Introduction
Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.
In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.
However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike numeric or categorical data, simple descriptive statistics alone, such as word counts and vocabulary size, can hardly capture the syntactic and semantic properties of a text collection.
In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.
To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics.
Related Work
A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is the n-gram, with dimension equal to the vocabulary size. More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector, such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector, such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.
More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.
Another stream of related work is evaluation metrics for cluster analysis. As measuring the property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 is often used. There are unsupervised metrics to measure the quality of a clustering result, such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density, and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra-cluster rather than inter-cluster relationships.
Proposed Characteristic Metrics
We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Our first assumption is that, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained, intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. Each example should also reflect a sufficient signature of the class to which it belongs; that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should be semantically similar and coherent with each other within a cluster, which is where homogeneity comes into play.
A more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity increases with the number of classes, since text clusters should be as distinct and separate as possible from one another. If there is significant ambiguity between classes, the behavior of the proposed metrics, and a possible new metric as an inter-class confusability measurement, remain for future work.
In practice, the input is a collection of texts $\lbrace x_1, x_2, ..., x_m\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\mathcal {E}$ then transforms $x_i$ into a vector $\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example,
Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.
We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method.
Proposed Characteristic Metrics ::: Diversity
Embedding vectors of a given group of texts $\lbrace e_1, ..., e_m\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spreadness via a generalized sense of the radius.
Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\mathbb {R}^{H}$. Such isocontours can be described as:
where $x$ are all possible points in $\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\mu $ is a given mean vector with $\mu _j$ being the value along $j$-th axis, and $\sigma ^2_j$ is the variance of the $j$-th axis.
We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\sqrt{\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:
where $\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.
In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT.
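A minimal sketch of this computation is given below; it assumes the embeddings are stacked in a NumPy array of shape (m, H), and the small constant added for numerical stability is an implementation choice rather than part of the definition.

```python
import numpy as np

def diversity(embeddings: np.ndarray) -> float:
    """Geometric mean of per-dimension standard deviations of (m, H) embeddings."""
    radii = embeddings.std(axis=0)                        # sigma_i along each axis
    return float(np.exp(np.mean(np.log(radii + 1e-12))))  # geometric mean of radii

# Example: 1,000 vectors in a 768-dimensional space
print(diversity(np.random.randn(1000, 768)))              # close to 1.0 for unit-variance data
```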
Proposed Characteristic Metrics ::: Density
Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.
Following the assumption mentioned above, a straight-forward definition of the volume can be written as:
up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.
To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:
Given a set of embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we define the density metric as:
In practice, the computed density metric often follows a heavy-tailed distribution; thus its $\log $ value is sometimes reported and denoted as density (log-scale).
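Because the display equations for the volume are not reproduced in this text, the sketch below should be read as one plausible implementation: it assumes the dimension-normalized volume is the effective radius (the geometric mean of all radii) raised to the power $\sqrt{H}$, and it reports the density on a log scale.

```python
import numpy as np

def density_log_scale(embeddings: np.ndarray) -> float:
    """log(m / normalized volume); the normalization used here is an assumption."""
    m, H = embeddings.shape
    radii = embeddings.std(axis=0)
    effective_radius = np.exp(np.mean(np.log(radii + 1e-12)))  # geometric mean of radii
    log_volume = np.sqrt(H) * np.log(effective_radius)          # assumed dimension normalization
    return float(np.log(m) - log_volume)

print(density_log_scale(np.random.randn(1000, 768)))
```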
Proposed Characteristic Metrics ::: Homogeneity
The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:
To construct a fully-connected network from the embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\log (n\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being
and the conditional probability of transition from $i$ to $j$ can be written as
All the transition probabilities $p(i \rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as
where $\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes
Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:
The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed.
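The sketch below follows the recipe described here, i.e., pairwise distances, a transition matrix, its stationary distribution, and a normalized entropy rate; because the exact edge-weight and distance-normalization formulas are not reproduced in this text, the inverse-distance weight is a stand-in assumption.

```python
import numpy as np
from scipy.spatial.distance import cdist

def homogeneity(embeddings: np.ndarray) -> float:
    m = embeddings.shape[0]
    dist = cdist(embeddings, embeddings)          # pairwise Euclidean distances
    weights = 1.0 / (dist + 1e-12)                # assumed stand-in for the paper's edge weight
    np.fill_diagonal(weights, 0.0)                # self-transitions get zero weight
    P = weights / weights.sum(axis=1, keepdims=True)

    # Stationary distribution: leading left eigenvector of the transition matrix
    eigvals, eigvecs = np.linalg.eig(P.T)
    nu = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    nu /= nu.sum()

    entropy = -np.sum(nu[:, None] * P * np.log(P + 1e-12))
    return float(entropy / np.log(m - 1))         # divide by the theoretical upper bound

print(homogeneity(np.random.randn(200, 768)))     # close to 1.0 for a uniform Gaussian blob
```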
Simulations
To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method-BERT, in the following Experiments section.
Simulations ::: Simulation Setup
The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points with the standard deviation along each axis to be $1.0$ and is centered around the origin. All Gaussian blobs are created using make_blobs function in the scikit-learn package.
Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:
Down-sampling: Down-sample the base cluster to be $\lbrace 90\%, 80\%, ..., 10\%\rbrace $ of its original size. That is, create Gaussian blobs with $\lbrace 9000, ..., 1000\rbrace $ data points;
Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\lbrace 2.0, 3.0, ..., 10.0\rbrace $;
Outliers: Add $\lbrace 50, 100, ..., 500\rbrace $ outlier data points, i.e., $\lbrace 0.5\%, ..., 5\%\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;
Multiple Sub-clusters: Along the first axis, with $10,000$ data points in total, create $\lbrace 1, 2, ..., 10\rbrace $ clusters with equal sample sizes but at increasing distances.
For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE.
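A sketch of the base setup and the down-sampling scenario is shown below; the metric functions are assumed to be implementations such as those sketched earlier, and the other three scenarios follow the same pattern.

```python
from sklearn.datasets import make_blobs

for n_dim in (2, 768):
    X, _ = make_blobs(n_samples=10_000, n_features=n_dim,
                      centers=[[0.0] * n_dim], cluster_std=1.0, random_state=0)
    for n_keep in range(10_000, 0, -1_000):        # down-sampling scenario
        sample = X[:n_keep]
        # compute diversity(sample), density_log_scale(sample), homogeneity(sample)
```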
Simulations ::: Simulation Results
Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.
The diversity metric is robust, as its values remain almost unchanged under down-sampling of an input cluster. This implies the diversity metric has the desirable property of being insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. This is another intuitive property for a diversity metric: it grows linearly with increasing dispersion or variance of the input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions, but it is less sensitive in high-dimensional spaces.
The density metric exhibits a linear relationship to the size of inputs when down-sampling, which is desired. When increasing spreads, the trend of the density metric corresponds well with human intuition. Note that the density metric decreases at a much faster rate in the higher-dimensional space, as log-scale is used in the figure. The density metric also drops when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of the data and thus increase our notion of volume as well. In the multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.
As random down-sampling or increasing the variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric to remain approximately the same, and the proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first see large drops of the homogeneity metric, but the values go up again slowly as more outliers are added. This corresponds well with the intuition that a small number of outliers breaks the uniformity, but more outliers should mean an increase in uniformity because the distribution of the added outliers themselves has high uniformity.
For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.
To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution.
Experiments
The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU).
Experiments ::: Chosen Embedding Method
BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.
When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with the masked language model (MLM) objective, which randomly masks out tokens that the model must predict. The other pretraining task is next sentence prediction (NSP), which predicts whether two sequences follow each other in the original text or not.
In this work, we use the pretrained $\text{BERT}_{\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:
where $e_i \in \mathbb {R}^{H}$. A text collection $\lbrace x_1, ..., x_m\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\lbrace e_1, ..., e_m\rbrace $.
We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset.
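The sketch below illustrates the mean pooling and the layer/class aggregation described here; the extraction of token-level embeddings from the pretrained GluonNLP BERT is omitted, and the helper names are hypothetical.

```python
import numpy as np

def sequence_embedding(token_embeddings: np.ndarray) -> np.ndarray:
    """Mean-pool token-level embeddings of shape (l, H) into a single H-dim vector."""
    return token_embeddings.mean(axis=0)

def dataset_metric(class_to_layer_embeddings, metric_fn, class_sizes):
    """Average metric_fn over the chosen BERT layers per class, then over classes
    weighted by class size. class_to_layer_embeddings maps each class label to a
    list of (m_c, H) arrays, one per layer (e.g., L1, L6, L12)."""
    per_class = {c: np.mean([metric_fn(layer_emb) for layer_emb in layers])
                 for c, layers in class_to_layer_embeddings.items()}
    total = sum(class_sizes.values())
    return sum(per_class[c] * class_sizes[c] for c in per_class) / total
```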
Experiments ::: Experimental Setup
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for the input I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. In the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains spoken utterances (as text) classified into one of 7 intents.
In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.
We down-sampled the SST-2 and Snips training sets from $100\%$ to $10\%$ in $10\%$ intervals. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used the entire test sets for all model evaluations.
To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets.
Experiments ::: Experimental Results
We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:
Experiments ::: Experimental Results ::: SST-2
In Table TABREF29, the sentiment classification accuracy is $92.66\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.
Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance.
Experiments ::: Experimental Results ::: Snips
In Table TABREF30, the Snips dataset appears to be well separated between IC/SL classes, since the IC accuracy and SL F1 are as high as $98.71\%$ and $96.06\%$ without down-sampling, respectively. Similar to SST-2, this implies that the Snips training data should also support the inter-class differentiability assumption for our proposed characteristic metrics.
IC accuracy on Snips remains higher than $98\%$ until we down-sample the training set to $20\%$ of the original size. In contrast, the SL F1 score is more sensitive to down-sampling, as it starts decreasing as soon as the training set shrinks. When only $10\%$ of the training set is left, the SL F1 score drops to $87.20\%$.
The diversity metric does not decrease until the training set is reduced to $40\%$ of the original set or less. This implies that random sampling does not impact the diversity if the sampling rate is greater than $40\%$; the training set very likely contains redundant information in terms of text diversity. This is supported by our observation that the model has consistently high IC/SL performance between the $40\%$ and $100\%$ down-sampling ratios.
Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\%$ to $10\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets.
Analysis
We calculate and show in Table TABREF35 the Pearson's correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and the model performance scores from the down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both of which are computed on the training set. Diversity is only moderately correlated with the Snips SL F1 score.
These findings are consistent with our simulation results, which show that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and, marginally, the homogeneity due to the decreasing number of data points in the embedding space. However, the simultaneous large drops in model performance, density, and homogeneity imply that there is only limited redundancy and that more informative data points are being thrown away when down-sampling. Moreover, the results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity.
Conclusions
In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work. | SST-2, Snips |
a8f1029f6766bffee38a627477f61457b2d6ed5c | a8f1029f6766bffee38a627477f61457b2d6ed5c_0 | Q: How did they obtain human intuitions?
Text: Introduction
Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.
In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.
However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike numeric or categorical data, simple descriptive statistics alone, such as word counts and vocabulary size, can hardly capture the syntactic and semantic properties of a text collection.
In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.
To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics.
Related Work
A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is the n-gram, with dimension equal to the vocabulary size. More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector, such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector, such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.
More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.
Another stream of related work is evaluation metrics for cluster analysis. As measuring the property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 is often used. There are unsupervised metrics to measure the quality of a clustering result, such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density, and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra-cluster rather than inter-cluster relationships.
Proposed Characteristic Metrics
We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Our first assumption is that, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained, intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. Each example should also reflect a sufficient signature of the class to which it belongs; that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should be semantically similar and coherent with each other within a cluster, which is where homogeneity comes into play.
A more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity increases with the number of classes, since text clusters should be as distinct and separate as possible from one another. If there is significant ambiguity between classes, the behavior of the proposed metrics, and a possible new metric as an inter-class confusability measurement, remain for future work.
In practice, the input is a collection of texts $\lbrace x_1, x_2, ..., x_m\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\mathcal {E}$ then transforms $x_i$ into a vector $\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example,
Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.
We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method.
Proposed Characteristic Metrics ::: Diversity
Embedding vectors of a given group of texts $\lbrace e_1, ..., e_m\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spreadness via a generalized sense of the radius.
Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\mathbb {R}^{H}$. Such isocontours can be described as:
where $x$ are all possible points in $\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\mu $ is a given mean vector with $\mu _j$ being the value along $j$-th axis, and $\sigma ^2_j$ is the variance of the $j$-th axis.
We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\sqrt{\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:
where $\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.
In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT.
Proposed Characteristic Metrics ::: Density
Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.
Following the assumption mentioned above, a straight-forward definition of the volume can be written as:
up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.
To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:
Given a set of embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we define the density metric as:
In practice, the computed density metric often follows a heavy-tailed distribution; thus its $\log $ value is sometimes reported and denoted as density (log-scale).
Proposed Characteristic Metrics ::: Homogeneity
The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:
To construct a fully-connected network from the embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\log (n\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being
and the conditional probability of transition from $i$ to $j$ can be written as
All the transition probabilities $p(i \rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as
where $\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes
Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:
The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed.
Simulations
To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method-BERT, in the following Experiments section.
Simulations ::: Simulation Setup
The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points with the standard deviation along each axis to be $1.0$ and is centered around the origin. All Gaussian blobs are created using make_blobs function in the scikit-learn package.
Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:
Down-sampling: Down-sample the base cluster to be $\lbrace 90\%, 80\%, ..., 10\%\rbrace $ of its original size. That is, create Gaussian blobs with $\lbrace 9000, ..., 1000\rbrace $ data points;
Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\lbrace 2.0, 3.0, ..., 10.0\rbrace $;
Outliers: Add $\lbrace 50, 100, ..., 500\rbrace $ outlier data points, i.e., $\lbrace 0.5\%, ..., 5\%\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;
Multiple Sub-clusters: Along the first axis, with $10,000$ data points in total, create $\lbrace 1, 2, ..., 10\rbrace $ clusters with equal sample sizes but at increasing distances.
For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE.
Simulations ::: Simulation Results
Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.
The diversity metric is robust, as its values remain almost unchanged under down-sampling of an input cluster. This implies the diversity metric has the desirable property of being insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. This is another intuitive property for a diversity metric: it grows linearly with increasing dispersion or variance of the input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions, but it is less sensitive in high-dimensional spaces.
The density metric exhibits a linear relationship to the size of inputs when down-sampling, which is desired. When increasing spreads, the trend of the density metric corresponds well with human intuition. Note that the density metric decreases at a much faster rate in the higher-dimensional space, as log-scale is used in the figure. The density metric also drops when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of the data and thus increase our notion of volume as well. In the multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.
As random down-sampling or increasing the variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric to remain approximately the same, and the proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first see large drops of the homogeneity metric, but the values go up again slowly as more outliers are added. This corresponds well with the intuition that a small number of outliers breaks the uniformity, but more outliers should mean an increase in uniformity because the distribution of the added outliers themselves has high uniformity.
For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.
To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution.
Experiments
The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU).
Experiments ::: Chosen Embedding Method
BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.
When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with the masked language model (MLM) objective, which randomly masks out tokens that the model must predict. The other pretraining task is next sentence prediction (NSP), which predicts whether two sequences follow each other in the original text or not.
In this work, we use the pretrained $\text{BERT}_{\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:
where $e_i \in \mathbb {R}^{H}$. A text collection $\lbrace x_1, ..., x_m\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\lbrace e_1, ..., e_m\rbrace $.
We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset.
Experiments ::: Experimental Setup
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for the input I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. In the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains spoken utterances (as text) classified into one of 7 intents.
In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.
We down-sampled the SST-2 and Snips training sets from $100\%$ to $10\%$ in $10\%$ intervals. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used the entire test sets for all model evaluations.
To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets.
Experiments ::: Experimental Results
We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:
Experiments ::: Experimental Results ::: SST-2
In Table TABREF29, the sentiment classification accuracy is $92.66\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.
Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance.
Experiments ::: Experimental Results ::: Snips
In Table TABREF30, the Snips dataset appears to be well separated between IC/SL classes, since the IC accuracy and SL F1 are as high as $98.71\%$ and $96.06\%$ without down-sampling, respectively. Similar to SST-2, this implies that the Snips training data should also support the inter-class differentiability assumption for our proposed characteristic metrics.
IC accuracy on Snips remains higher than $98\%$ until we down-sample the training set to $20\%$ of the original size. In contrast, the SL F1 score is more sensitive to down-sampling, as it starts decreasing as soon as the training set shrinks. When only $10\%$ of the training set is left, the SL F1 score drops to $87.20\%$.
The diversity metric does not decrease until the training set is reduced to $40\%$ of the original set or less. This implies that random sampling does not impact the diversity if the sampling rate is greater than $40\%$; the training set very likely contains redundant information in terms of text diversity. This is supported by our observation that the model has consistently high IC/SL performance between the $40\%$ and $100\%$ down-sampling ratios.
Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\%$ to $10\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets.
Analysis
We calculate and show in Table TABREF35 the Pearson's correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and the model performance scores from the down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both of which are computed on the training set. Diversity is only moderately correlated with the Snips SL F1 score.
These findings are consistent with our simulation results, which show that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and, marginally, the homogeneity due to the decreasing number of data points in the embedding space. However, the simultaneous large drops in model performance, density, and homogeneity imply that there is only limited redundancy and that more informative data points are being thrown away when down-sampling. Moreover, the results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity.
Conclusions
In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work. | Unanswerable |
a2103e7fe613549a9db5e65008f33cf2ee0403bd | a2103e7fe613549a9db5e65008f33cf2ee0403bd_0 | Q: What are the country-specific drivers of international development rhetoric?
Text: Introduction
Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .
Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.
The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.
An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on international agenda.
We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
The UN General Debate and international development
In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.
Estimation of topic models
We assess the optimal number of topics that needs to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose the semantic coherence measure, which is closely related to the point-wise mutual information measure posited by BIBREF6, to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.
Exclusivity scores for each topic follow BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 , we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade-off. We select the 16-topic model, which has the largest positive residual in the regression fit and provides higher exclusivity at the same level of semantic coherence. Topic quality is usually evaluated by the highest probability words, which are presented in Figure FIGREF4 .
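A sketch of this selection rule is given below; the candidate topic counts and their (coherence, exclusivity) scores are placeholders, whereas in the actual analysis they come from the fitted STM candidate models.

```python
import numpy as np

# Placeholder scores for a few candidate models (coherence closer to 0 is better).
n_topics    = np.array([5, 10, 16, 25, 40])
coherence   = np.array([-45.0, -52.0, -58.0, -70.0, -85.0])
exclusivity = np.array([8.1, 8.9, 9.6, 9.7, 9.9])

slope, intercept = np.polyfit(coherence, exclusivity, deg=1)   # regression overlay
residuals = exclusivity - (slope * coherence + intercept)
best = n_topics[np.argmax(residuals)]                          # largest positive residual
print(f"Selected model: {best} topics (residual {residuals.max():+.2f})")
```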
Topics in the UN General Debate
Figure FIGREF4 provides a list of the main topics (and the highest probability words associated with these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. These include the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.
Topic 1 - Security and cooperation in Europe.
The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.
Topic 2 - Economic development and the global system.
This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `financ-', etc. suggests that Topic 2 represents a more traditional view of international development, in that the emphasis is specifically on economic processes and relations.
Topic 3 - Nuclear disarmament.
This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.
Topic 4 - Post-conflict development.
This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggests that this topic relates to how these countries recover and move forward.
Topic 5 - African independence / decolonisation.
This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.
Topic 6 - Africa.
While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.
Topic 7 - Sustainable development.
This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.
Topic 8 - Functional topic.
This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.
Topic 9 - War.
This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.
Topic 10 - Conflict in the Middle East.
This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.
Topic 11 - Latin America.
This is another topic with a regional focus, picking up on issues related to Latin America.
Topic 12 - Commonwealth.
This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).
Topic 13 - International security.
This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particular the international response to security threats, such as the deployment of peacekeepers.
Topic 14 - International law.
This topic picks up issues related to international law, particularly connected to territorial disputes.
Topic 15 - Decolonisation.
This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.
Topic 16 - Cold War.
This is another of the less tightly defined topics. The topic appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union and detente, as well as issues such as nuclear weapons and the Helsinki Accords.
Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to them. In Figure FIGREF6 , the word clouds show the 50 words most likely to be mentioned in relation to each of the topics.
The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on `econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `agricultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .
Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while words such as `econom-', `trade', and even `develop-' have a much higher probability of being used to discuss Topic 2, words such as `climat-', `govern-', `sustain', `goal', and `support' are more likely to be used in association with Topic 7. This provides further support for Topic 2 representing a more economistic view of international development, while Topic 7 relates to a broader sustainable development agenda.
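A minimal sketch of this normalization, with assumed toy word probabilities for the two topics, is given below; positive values indicate words leaning towards Topic 2 and negative values words leaning towards Topic 7.

```python
import numpy as np

# Assumed word probabilities under the two development topics (toy values).
vocab = np.array(["econom", "trade", "develop", "climat", "sustain", "goal", "govern"])
beta_topic2 = np.array([0.12, 0.08, 0.07, 0.01, 0.01, 0.01, 0.02])
beta_topic7 = np.array([0.03, 0.01, 0.06, 0.07, 0.06, 0.04, 0.05])

diff = beta_topic2 - beta_topic7
normalized = diff / np.abs(diff).max()    # +1 = most Topic-2-leaning, -1 = most Topic-7-leaning

for word, value in sorted(zip(vocab, normalized), key=lambda pair: pair[1]):
    print(f"{word:>8s} {value:+.2f}")
```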
We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6).
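The correlation network can be illustrated with the following sketch, which ranks topic pairs by the correlation of their document proportions; the document-topic matrix here is simulated, whereas in the analysis it comes from the fitted STM.

```python
import numpy as np

# Assumed document-topic proportions theta[d, k] (toy values); with a real STM
# fit these come from the estimated model rather than a flat Dirichlet.
rng = np.random.default_rng(3)
theta = rng.dirichlet(np.ones(16), size=1000)      # 1,000 statements, 16 topics

corr = np.corrcoef(theta.T)                        # 16 x 16 topic correlation matrix

# Rank topic pairs by correlation; the strongest pairs form the edges of the network.
pairs = [(i, j, corr[i, j]) for i in range(16) for j in range(i + 1, 16)]
pairs.sort(key=lambda edge: edge[2], reverse=True)
for i, j, c in pairs[:3]:
    print(f"topic {i} -- topic {j}: corr = {c:+.3f}")
```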
Explaining the rhetoric
We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.
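As a hedged illustration of the method of composition, the sketch below repeatedly draws plausible topic proportions, refits a simple regression on each draw, and pools the coefficient draws into an interval; the inputs are simulated stand-ins, and the STM implementation handles the posterior draws differently in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n_docs, n_draws = 400, 200

# Assumed inputs: a point estimate and a rough posterior spread for each document's
# proportion of one topic, plus a single structural covariate (e.g. log GDP per capita).
theta_hat = rng.beta(2, 8, n_docs)
theta_sd = 0.02 + 0.03 * rng.random(n_docs)
x = rng.normal(0, 1, n_docs)
X = np.column_stack([np.ones(n_docs), x])

coefs = np.empty((n_draws, 2))
for s in range(n_draws):
    # 1. Draw plausible topic proportions from their approximate posterior.
    theta_s = np.clip(theta_hat + rng.normal(0, theta_sd), 0, 1)
    # 2. Refit the regression on this draw.
    coefs[s], *_ = np.linalg.lstsq(X, theta_s, rcond=None)

# 3. Pool the draws: the spread reflects regression and measurement uncertainty together.
lo, hi = np.percentile(coefs[:, 1], [2.5, 97.5])
print(f"covariate effect: {coefs[:, 1].mean():+.4f}  (95% CI {lo:+.4f} to {hi:+.4f})")
```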
Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000, when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent to which countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.
Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.
We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.
We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale, after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).
We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.
Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.
The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principal development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that, broadly speaking, we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration.
Conclusion
Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance. | wealth , democracy , population, levels of ODA, conflict |
13b36644357870008d70e5601f394ec3c6c07048 | 13b36644357870008d70e5601f394ec3c6c07048_0 | Q: Is the dataset multilingual?
Text: Introduction
Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .
Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.
The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.
An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. The GD can therefore be viewed as a key forum for governments to put different policy issues on the international agenda.
We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
The UN General Debate and international development
In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.
Estimation of topic models
We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose a semantic coherence measure, which is closely related to the point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.
Exclusivity scores for each topic follow BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 , we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade-off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by the highest probability words, which are presented in Figure FIGREF4 .
Topics in the UN General Debate
Figure FIGREF4 provides a list of the main topics (and the highest probability words associated with these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. These include the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.
Topic 1 - Security and cooperation in Europe.
The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.
Topic 2 - Economic development and the global system.
This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `financ-', etc. suggests that Topic 2 represents a more traditional view of international development, in that the emphasis is specifically on economic processes and relations.
Topic 3 - Nuclear disarmament.
This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.
Topic 4 - Post-conflict development.
This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggests that this topic relates to how these countries recover and move forward.
Topic 5 - African independence / decolonisation.
This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.
Topic 6 - Africa.
While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.
Topic 7 - Sustainable development.
This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.
Topic 8 - Functional topic.
This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.
Topic 9 - War.
This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.
Topic 10 - Conflict in the Middle East.
This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.
Topic 11 - Latin America.
This is another topic with a regional focus, picking up on issues related to Latin America.
Topic 12 - Commonwealth.
This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).
Topic 13 - International security.
This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particular the international response to security threats, such as the deployment of peacekeepers.
Topic 14 - International law.
This topic picks up issues related to international law, particularly connected to territorial disputes.
Topic 15 - Decolonisation.
This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.
Topic 16 - Cold War.
This is another of the less tightly defined topics. The topic appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union and detente, as well as issues such as nuclear weapons and the Helsinki Accords.
Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to them. In Figure FIGREF6 , the word clouds show the 50 words most likely to be mentioned in relation to each of the topics.
The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on `econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `agricultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .
Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while words such as `econom-', `trade', and even `develop-' have a much higher probability of being used to discuss Topic 2, words such as `climat-', `govern-', `sustain', `goal', and `support' are more likely to be used in association with Topic 7. This provides further support for Topic 2 representing a more economistic view of international development, while Topic 7 relates to a broader sustainable development agenda.
We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6).
Explaining the rhetoric
We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.
Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000, when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent to which countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.
Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.
We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.
We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale, after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).
We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.
Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.
The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principal development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that, broadly speaking, we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration.
Conclusion
Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance. | No |
13b36644357870008d70e5601f394ec3c6c07048 | 13b36644357870008d70e5601f394ec3c6c07048_1 | Q: Is the dataset multilingual?
Text: Introduction
Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .
Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.
The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.
An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. The GD can therefore be viewed as a key forum for governments to put different policy issues on the international agenda.
We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
The UN General Debate and international development
In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.
Estimation of topic models
We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose a semantic coherence measure, which is closely related to the point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.
Exclusivity scores for each topic follow BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 , we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade-off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by the highest probability words, which are presented in Figure FIGREF4 .
Topics in the UN General Debate
Figure FIGREF4 provides a list of the main topics (and the highest probability words associated with these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. These include the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.
Topic 1 - Security and cooperation in Europe.
The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.
Topic 2 - Economic development and the global system.
This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `financ-', etc. suggests that Topic 2 represents a more traditional view of international development, in that the emphasis is specifically on economic processes and relations.
Topic 3 - Nuclear disarmament.
This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.
Topic 4 - Post-conflict development.
This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggests that this topic relates to how these countries recover and move forward.
Topic 5 - African independence / decolonisation.
This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.
Topic 6 - Africa.
While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.
Topic 7 - Sustainable development.
This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.
Topic 8 - Functional topic.
This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.
Topic 9 - War.
This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.
Topic 10 - Conflict in the Middle East.
This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.
Topic 11 - Latin America.
This is another topic with a regional focus, picking up on issues related to Latin America.
Topic 12 - Commonwealth.
This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).
Topic 13 - International security.
This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particular the international response to security threats, such as the deployment of peacekeepers.
Topic 14 - International law.
This topic picks up issues related to international law, particularly connected to territorial disputes.
Topic 15 - Decolonisation.
This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.
Topic 16 - Cold War.
This is another of the less tightly defined topics. The topic appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union and detente, as well as issues such as nuclear weapons and the Helsinki Accords.
Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to them. In Figure FIGREF6 , the word clouds show the 50 words most likely to be mentioned in relation to each of the topics.
The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on `econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `agricultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .
Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while words such as `econom-', `trade', and even `develop-' have a much higher probability of being used to discuss Topic 2, words such as `climat-', `govern-', `sustain', `goal', and `support' are more likely to be used in association with Topic 7. This provides further support for Topic 2 representing a more economistic view of international development, while Topic 7 relates to a broader sustainable development agenda.
We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6).
Explaining the rhetoric
We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.
Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000, when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent to which countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.
Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.
We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.
We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale, after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).
We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.
Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.
The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principal development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that, broadly speaking, we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration.
Conclusion
Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance. | No |
e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce | e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce_0 | Q: How are the main international development topics that states raise identified?
Text: Introduction
Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .
Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.
The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.
An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on the international agenda.
We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
The UN General Debate and international development
In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.
Estimation of topic models
We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose a semantic coherence measure, which is closely related to the point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.
Exclusivity scores for each topic follow BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8, we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade-off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence.
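This selection rule can be written down compactly. The Python sketch below fits a straight line to hypothetical (semantic coherence, exclusivity) pairs for candidate models with 3 to 50 topics and picks the candidate with the largest positive residual; the scores are random placeholders standing in for values produced by fitted models, which would typically come from the R stm package rather than this snippet.

import numpy as np

# Hypothetical per-model scores; in practice these come from the fitted candidate models.
candidate_k = np.arange(3, 51)                                   # 3..50 topics
coherence = np.random.uniform(-120, -20, len(candidate_k))       # placeholder semantic coherence
exclusivity = np.random.uniform(8.5, 9.8, len(candidate_k))      # placeholder exclusivity

slope, intercept = np.polyfit(coherence, exclusivity, deg=1)     # linear fit of exclusivity on coherence
residuals = exclusivity - (slope * coherence + intercept)

best_k = candidate_k[np.argmax(residuals)]                       # largest positive residual
print(f"Selected number of topics: {best_k}")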
Topics in the UN General Debate
Figure FIGREF4 provides a list of the main topics (and the highest probability words associated with these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. These include the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.
Topic 1 - Security and cooperation in Europe.
The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.
Topic 2 - Economic development and the global system.
This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `financ-', etc. suggests that Topic 2 represents a more traditional view of international development in that the emphasis is specifically on economic processes and relations.
Topic 3 - Nuclear disarmament.
This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.
Topic 4 - Post-conflict development.
This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggest that this topic relates to how these countries recover and move forward.
Topic 5 - African independence / decolonisation.
This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.
Topic 6 - Africa.
While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.
Topic 7 - Sustainable development.
This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.
Topic 8 - Functional topic.
This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.
Topic 9 - War.
This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.
Topic 10 - Conflict in the Middle East.
This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.
Topic 11 - Latin America.
This is another topic with a regional focus, picking up on issues related to Latin America.
Topic 12 - Commonwealth.
This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).
Topic 13 - International security.
This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particularly the international response to security threats, such as the deployment of peacekeepers.
Topic 14 - International law.
This topic picks up issues related to international law, particularly connected to territorial disputes.
Topic 15 - Decolonisation.
This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.
Topic 16 - Cold War.
This is another of the less tightly defined topics. The topic appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union and detente, as well as issues such as nuclear weapons and the Helsinki Accords.
Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to these two topics. In Figure FIGREF6 , the word clouds show the 50 words most likely to mentioned in relation to each of the topics.
The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on `econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `agricultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .
Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while words such as `econom-', `trade', and even `develop-' have a much higher probability of being used to discuss Topic 2, words such as `climat-', `govern-', `sustain', `goal', and `support' are more likely to be used in association with Topic 7. This provides further support for Topic 2 representing a more economistic view of international development, while Topic 7 relates to a broader sustainable development agenda.
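For readers who want to reproduce this kind of contrast, the short sketch below computes the per-word difference in topic probabilities, normalised by the largest absolute difference; the topic-word probabilities and vocabulary are illustrative placeholders rather than the estimated model parameters.

import numpy as np

vocab = ["econom", "trade", "develop", "climat", "sustain", "goal"]
beta_topic2 = np.array([0.040, 0.025, 0.030, 0.001, 0.002, 0.003])   # placeholder word probabilities
beta_topic7 = np.array([0.010, 0.004, 0.025, 0.020, 0.018, 0.012])

diff = beta_topic2 - beta_topic7
normalised = diff / np.max(np.abs(diff))    # +1 leans towards Topic 2, -1 towards Topic 7

for word, score in sorted(zip(vocab, normalised), key=lambda pair: pair[1]):
    print(f"{word:>10s} {score:+.2f}")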
We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6).
Explaining the rhetoric
We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.
Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000, when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent to which countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.
Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.
We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.
We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly, the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase, reaching a peak at around 5 on the Polity scale; after this, there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).
We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.
Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.
The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principal development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration.
Conclusion
Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance. | They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence.
fd0ef5a7b6f62d07776bf672579a99c67e61a568 | fd0ef5a7b6f62d07776bf672579a99c67e61a568_0 | Q: What experiments do the authors present to validate their system?
Text: Introduction
QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them in enriching the KB; we call this feature active-learning. Our system also allows users to add a Multi-Turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack.
System description ::: Architecture
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
System description ::: Bot Development Process
Creating a bot is a 3-step process for a bot developer:
Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data
Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.
Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same.
System description ::: Extraction
The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, the basic blocks of a given document, such as text and lines, are extracted. Then the layout of the document, such as columns, tables, lists, and paragraphs, is extracted. This is done using Recursive X-Y cut BIBREF0. Following Layout Understanding, each element is tagged as headers, footers, table of contents, index, watermark, table, image, table caption, image caption, heading, heading level, and answers. Agglomerative clustering BIBREF1 is used to identify headings and hierarchy to form an intent tree. Leaf nodes from the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated in and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity.
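As a rough illustration of the clustering step, the sketch below groups headings by simple layout features so that headings at the same level fall into the same cluster; the feature set (font size, indentation, boldness) and the example headings are assumptions for illustration, not the production system's actual features.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

headings = ["1. Benefits", "1.1 Health", "1.2 Leave", "2. Payroll", "2.1 Taxes"]
# Columns: font size, indentation (px), is_bold -- hypothetical layout features.
features = np.array([
    [16, 0, 1],
    [12, 20, 0],
    [12, 20, 0],
    [16, 0, 1],
    [12, 20, 0],
], dtype=float)

levels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
for text, level in zip(headings, levels):
    print(level, text)   # headings sharing a cluster id sit at the same hierarchy level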
System description ::: Retrieval And Ranking
QnAMaker uses Azure Search Index as its retrieval layer, followed by re-ranking on top of retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit-distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on the QnAMaker WebApp, which gets fewer than 100 results to re-rank.
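A minimal stand-in for this retrieve-then-re-rank flow is sketched below: KB questions are indexed with TF-IDF and a cosine-similarity shortlist is handed to the ranker. Azure Search adds fuzzy matching, lemmatization and scale that this sketch does not model, and the example KB is hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_questions = [
    "What is the price of furniture?",
    "How do I reset my password?",
    "What are the benefits of the premium plan?",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
kb_matrix = vectorizer.fit_transform(kb_questions)

query_vec = vectorizer.transform(["price of table"])
scores = cosine_similarity(query_vec, kb_matrix).ravel()

top_k = scores.argsort()[::-1][:2]      # shortlist passed on to the re-ranker
for idx in top_k:
    print(f"{scores[idx]:.3f}  {kb_questions[idx]}")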
Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases.
System description ::: Retrieval And Ranking ::: Pre-Processing
The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query.
System description ::: Retrieval And Ranking ::: Features
Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:
WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture" in a KB and the end-user asks about “price of table", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:
Distance of 2 words in the WordNet graph
Distance of Lowest Common Hypernym from the root
Knowledge-Base word importance (Local IDFs)
Global word importance (Global IDFs)
This is the most important feature in our model as it has the highest relative feature gain.
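The sketch below computes two of the word-level signals listed above with NLTK's WordNet interface: a graph-distance-based similarity and the depth of the lowest common hypernym. The exact scoring formulas and the IDF weighting used in the production ranker are not published, so this is only an approximation of the feature family.

from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet") beforehand

def wordnet_signals(word_a, word_b):
    synsets_a = wn.synsets(word_a, pos=wn.NOUN)
    synsets_b = wn.synsets(word_b, pos=wn.NOUN)
    if not synsets_a or not synsets_b:
        return None
    s_a, s_b = synsets_a[0], synsets_b[0]
    path_sim = s_a.path_similarity(s_b)                    # based on distance in the WordNet graph
    lch = s_a.lowest_common_hypernyms(s_b)
    lch_depth = lch[0].min_depth() if lch else 0           # distance of the LCH from the root
    return {"path_similarity": path_sim, "lch_depth": lch_depth}

print(wordnet_signals("table", "furniture"))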
CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.
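How such a dual-encoder score becomes a ranking feature can be shown in a few lines: the query vector from the source model and the answer vector from the target model are compared with cosine similarity. The encoder functions below are random placeholders, since the trained CDSSM models are not available here.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def encode_query(text):                 # placeholder for the CDSSM source model
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

def encode_answer(text):                # placeholder for the CDSSM target model
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

feature = cosine(encode_query("price of table"),
                 encode_answer("Furniture prices start at 100 dollars."))
print(f"CDSSM-style relevance feature: {feature:.3f}")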
TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps.
System description ::: Retrieval And Ranking ::: Contextual Features
We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:
$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes" and the previous answer is “do you want to know about XYZ", the current query becomes “do you want to know about XYZ yes".
Candidate QnA pairs are appended with its parent Questions and Answers; no contextual information is used from the user's query. For instance, if a candidate QnA has a question “benefits" and its parent question was “know about XYZ", the candidate QA's question is changed to “know about XYZ benefits".
The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information.
System description ::: Retrieval And Ranking ::: Modeling and Training
We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on the Generality-to-Progress ratio is used to decide the number of step trees, and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there are small changes in features or training data so that the score distribution is not changed drastically.
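A hedged sketch of this ranking model is given below: gradient-boosted trees over the feature vector of a (query, candidate QnA) pair, trained with early stopping on a held-out split. The Generality-to-Progress criterion and Tolerant Pruning are not available in scikit-learn, so the generic n_iter_no_change early stopping and synthetic data stand in for them.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                    # e.g. WordNet, CDSSM and TF-IDF features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

ranker = GradientBoostingClassifier(
    n_estimators=500,
    learning_rate=0.05,
    validation_fraction=0.2,
    n_iter_no_change=10,                          # generic early stopping
    random_state=0,
)
ranker.fit(X, y)
print("trees actually used:", ranker.n_estimators_)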
System description ::: Persona Based Chit-Chat
We add support for bot-developers to directly enable handling chit-chat queries like “hi", “thank you", “what's up" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous" personality can be used for a casual bot, whereas a “Professional" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8.
System description ::: Active Learning
The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as ranker's implicit signals. For instance, if for a query, CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher, active learning system will try to disambiguate it by showing this as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DB-Scan clustering is done which optimizes the number of suggestions shown.
Evaluation and Insights
QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on a given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). A hybrid of deep learning (CDSSM) and machine learning features gives our ranking model low computation cost, high explainability, and a significant F1/AUC score. Based on QnAMaker usage, we observed these trends:
Around 27% of the knowledge bases created use pre-built persona-based chitchat, out of which, $\sim $4% of the knowledge bases are created for chit-chat alone. The highest used personality is Professional which is used in 9% knowledge bases.
Around $\sim $25% developers have enabled active learning suggestions. The acceptance to reject ratio for active learning suggestions is 0.31.
25.5% of the knowledge bases use one URL as a source while creation. $\sim $41% of the knowledge bases created use different sources like multiple URLs. 15.19% of the knowledge bases use both URL and editorial content as sources. Rest use just editorial content.
Demonstration
We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website. A demo video can be seen here.
Future Work
The system currently doesn't highlight the answer span and does not generate answers taking the KB as grounding. We will soon be supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user-documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting with improving our ranking system by using semantic vector-based search as our retrieval layer and transformer-based models for re-ranking. | we measure our system's performance for datasets across various domains; evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs
071bcb4b054215054f17db64bfd21f17fd9e1a80 | 071bcb4b054215054f17db64bfd21f17fd9e1a80_0 | Q: How does the conversation layer work?
Text: Introduction
QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them in enriching the KB; we call this feature active-learning. Our system also allows users to add a Multi-Turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack.
System description ::: Architecture
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
System description ::: Bot Development Process
Creating a bot is a 3-step process for a bot developer:
Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data
Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.
Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same.
System description ::: Extraction
The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, the basic blocks of a given document, such as text and lines, are extracted. Then the layout of the document, such as columns, tables, lists, and paragraphs, is extracted. This is done using Recursive X-Y cut BIBREF0. Following Layout Understanding, each element is tagged as headers, footers, table of contents, index, watermark, table, image, table caption, image caption, heading, heading level, and answers. Agglomerative clustering BIBREF1 is used to identify headings and hierarchy to form an intent tree. Leaf nodes from the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated in and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity.
System description ::: Retrieval And Ranking
QnAMaker uses Azure Search Index as its retrieval layer, followed by re-ranking on top of retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit-distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on the QnAMaker WebApp, which gets fewer than 100 results to re-rank.
Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases.
System description ::: Retrieval And Ranking ::: Pre-Processing
The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query.
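A simplified stand-in for this normalization step is shown below; the production components (language detection, speller, word breaker, lemmatizer) are not reproduced, and the stop-word list is a small illustrative assumption.

import re

STOP_WORDS = {"the", "a", "an", "of", "is", "what", "please"}   # illustrative only

def normalize_query(query):
    query = query.lower()
    query = re.sub(r"[^a-z0-9\s]", " ", query)                  # drop junk characters
    tokens = [t for t in query.split() if t not in STOP_WORDS]  # drop stop-words
    return " ".join(tokens)

print(normalize_query("What is the price of the table???"))     # -> "price table"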
System description ::: Retrieval And Ranking ::: Features
Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:
WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture" in a KB and the end-user asks about “price of table", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:
Distance of 2 words in the WordNet graph
Distance of Lowest Common Hypernym from the root
Knowledge-Base word importance (Local IDFs)
Global word importance (Global IDFs)
This is the most important feature in our model as it has the highest relative feature gain.
CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.
TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps.
System description ::: Retrieval And Ranking ::: Contextual Features
We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:
$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes" and the previous answer is “do you want to know about XYZ", the current query becomes “do you want to know about XYZ yes".
Candidate QnA pairs are appended with its parent Questions and Answers; no contextual information is used from the user's query. For instance, if a candidate QnA has a question “benefits" and its parent question was “know about XYZ", the candidate QA's question is changed to “know about XYZ benefits".
The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information.
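The two modifications described above amount to simple string construction, sketched below; the candidate's field names are assumptions about the data layout rather than the actual schema.

def build_contextual_inputs(query, previous_answer, candidate):
    # 1) Append the user query to the previous answer.
    query_modified = f"{previous_answer} {query}".strip()
    # 2) Prepend the candidate's parent question to its own question.
    parent_question = candidate.get("parent_question", "")
    candidate_question = f"{parent_question} {candidate['question']}".strip()
    return query_modified, candidate_question

candidate = {"question": "benefits", "parent_question": "know about XYZ"}
print(build_contextual_inputs("yes", "do you want to know about XYZ", candidate))
# -> ('do you want to know about XYZ yes', 'know about XYZ benefits')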
System description ::: Retrieval And Ranking ::: Modeling and Training
We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on the Generality-to-Progress ratio is used to decide the number of step trees, and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there are small changes in features or training data so that the score distribution is not changed drastically.
System description ::: Persona Based Chit-Chat
We add support for bot-developers to directly enable handling chit-chat queries like “hi", “thank you", “what's up" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous" personality can be used for a casual bot, whereas a “Professional" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8.
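The arbitration step can be pictured as a small routing function: if the chat-domain classifier is confident the query is chit-chat, the persona-grounded chit-chat answer is returned, otherwise the KB ranker's answer is used. The classifier, threshold and answer sources below are placeholders, not the production components.

def answer_query(query, chat_domain_prob, chitchat_answer, kb_answer, threshold=0.7):
    if chat_domain_prob(query) >= threshold:       # query looks like chit-chat
        return chitchat_answer(query)              # persona-grounded reply
    return kb_answer(query)                        # knowledge-base ranker reply

reply = answer_query(
    "what's up",
    chat_domain_prob=lambda q: 0.93,
    chitchat_answer=lambda q: "Hello! How can I help you today?",
    kb_answer=lambda q: "No good match found in the knowledge base.",
)
print(reply)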
System description ::: Active Learning
The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as ranker's implicit signals. For instance, if for a query, CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher, active learning system will try to disambiguate it by showing this as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DB-Scan clustering is done which optimizes the number of suggestions shown.
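The de-duplication of suggestions can be sketched as clustering followed by taking one representative per cluster; the suggestion embeddings below are synthetic placeholders for whatever query representation the production system uses.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 16))                                # 8 groups of near-duplicate suggestions
suggestions = np.repeat(centers, 5, axis=0) + rng.normal(scale=0.05, size=(40, 16))

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(suggestions)

shown, seen = [], set()
for idx, label in enumerate(labels):
    if label == -1:                    # noise point: no near-duplicates, keep it
        shown.append(idx)
    elif label not in seen:            # one representative per cluster
        shown.append(idx)
        seen.add(label)
print(f"{len(shown)} of {len(labels)} suggestions shown to the developer")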
Evaluation and Insights
QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on a given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). A hybrid of deep learning (CDSSM) and machine learning features gives our ranking model low computation cost, high explainability, and a significant F1/AUC score. Based on QnAMaker usage, we observed these trends:
Around 27% of the knowledge bases created use pre-built persona-based chitchat, out of which, $\sim $4% of the knowledge bases are created for chit-chat alone. The highest used personality is Professional which is used in 9% knowledge bases.
Around $\sim $25% developers have enabled active learning suggestions. The acceptance to reject ratio for active learning suggestions is 0.31.
25.5% of the knowledge bases use one URL as a source while creation. $\sim $41% of the knowledge bases created use different sources like multiple URLs. 15.19% of the knowledge bases use both URL and editorial content as sources. Rest use just editorial content.
Demonstration
We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website. A demo video can be seen here.
Future Work
The system currently doesn't highlight the answer span and does not generate answers taking the KB as grounding. We will soon be supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user-documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting with improving our ranking system by using semantic vector-based search as our retrieval layer and transformer-based models for re-ranking. | Unanswerable
f399d5a8dbeec777a858f81dc4dd33a83ba341a2 | f399d5a8dbeec777a858f81dc4dd33a83ba341a2_0 | Q: What components is the QnAMaker composed of?
Text: Introduction
QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them in enriching the KB; we call this feature active-learning. Our system also allows users to add a Multi-Turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack.
System description ::: Architecture
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
System description ::: Bot Development Process
Creating a bot is a 3-step process for a bot developer:
Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data
Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.
Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same.
System description ::: Extraction
The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, the basic blocks of a given document, such as text and lines, are extracted. Then the layout of the document, such as columns, tables, lists, and paragraphs, is extracted. This is done using Recursive X-Y cut BIBREF0. Following Layout Understanding, each element is tagged as headers, footers, table of contents, index, watermark, table, image, table caption, image caption, heading, heading level, and answers. Agglomerative clustering BIBREF1 is used to identify headings and hierarchy to form an intent tree. Leaf nodes from the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated in and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity.
System description ::: Retrieval And Ranking
QnAMaker uses Azure Search Index as its retrieval layer, followed by re-ranking on top of retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit-distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on the QnAMaker WebApp, which gets fewer than 100 results to re-rank.
Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases.
System description ::: Retrieval And Ranking ::: Pre-Processing
The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query.
System description ::: Retrieval And Ranking ::: Features
Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:
WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture" in a KB and the end-user asks about “price of table", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:
Distance of 2 words in the WordNet graph
Distance of Lowest Common Hypernym from the root
Knowledge-Base word importance (Local IDFs)
Global word importance (Global IDFs)
This is the most important feature in our model as it has the highest relative feature gain.
CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.
TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps.
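One way to picture the combination of local and global IDFs is the blended TF-IDF score below; the interpolation weight and the toy IDF tables are assumptions for illustration, since the ranker's actual combination is not published.

import math
from collections import Counter

def blended_tfidf(query_tokens, doc_tokens, local_idf, global_idf, alpha=0.5):
    tf = Counter(doc_tokens)
    score = 0.0
    for term in query_tokens:
        idf = alpha * local_idf.get(term, 0.0) + (1 - alpha) * global_idf.get(term, 0.0)
        if tf[term]:
            score += (1 + math.log(tf[term])) * idf
    return score

local_idf = {"table": 2.3, "price": 1.1}       # KB-specific IDFs
global_idf = {"table": 0.9, "price": 1.8}      # corpus-wide IDFs
print(blended_tfidf(["price", "table"], ["price", "of", "table", "table"], local_idf, global_idf))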
System description ::: Retrieval And Ranking ::: Contextual Features
We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:
$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes" and the previous answer is “do you want to know about XYZ", the current query becomes “do you want to know about XYZ yes".
Candidate QnA pairs are appended with its parent Questions and Answers; no contextual information is used from the user's query. For instance, if a candidate QnA has a question “benefits" and its parent question was “know about XYZ", the candidate QA's question is changed to “know about XYZ benefits".
The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information.
System description ::: Retrieval And Ranking ::: Modeling and Training
We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on the Generality-to-Progress ratio is used to decide the number of step trees, and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there are small changes in features or training data so that the score distribution is not changed drastically.
System description ::: Persona Based Chit-Chat
We add support for bot-developers to directly enable handling chit-chat queries like “hi", “thank you", “what's up" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous" personality can be used for a casual bot, whereas a “Professional" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8.
System description ::: Active Learning
The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as ranker's implicit signals. For instance, if for a query, CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher, active learning system will try to disambiguate it by showing this as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DB-Scan clustering is done which optimizes the number of suggestions shown.
Evaluation and Insights
QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on a given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). A hybrid of deep learning (CDSSM) and machine learning features gives our ranking model low computation cost, high explainability, and a significant F1/AUC score. Based on QnAMaker usage, we observed these trends:
Around 27% of the knowledge bases created use pre-built persona-based chitchat, out of which, $\sim $4% of the knowledge bases are created for chit-chat alone. The highest used personality is Professional which is used in 9% knowledge bases.
Around $\sim $25% developers have enabled active learning suggestions. The acceptance to reject ratio for active learning suggestions is 0.31.
25.5% of the knowledge bases use one URL as a source while creation. $\sim $41% of the knowledge bases created use different sources like multiple URLs. 15.19% of the knowledge bases use both URL and editorial content as sources. Rest use just editorial content.
Demonstration
We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website. A demo video can be seen here.
Future Work
The system currently doesn't highlight the answer span and does not generate answers taking the KB as grounding. We will soon be supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user-documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting with improving our ranking system by using semantic vector-based search as our retrieval layer and transformer-based models for re-ranking. | QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot
f399d5a8dbeec777a858f81dc4dd33a83ba341a2 | f399d5a8dbeec777a858f81dc4dd33a83ba341a2_1 | Q: What components is the QnAMaker composed of?
Text: Introduction
QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all the NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them in enriching the KB; we call this feature active learning. Our system also allows users to add a Multi-Turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Services and currently runs using the Microsoft Azure Stack.
System description ::: Architecture
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
System description ::: Bot Development Process
Creating a bot is a 3-step process for a bot developer:
Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data
Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.
Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same.
System description ::: Extraction
The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later by the QnAMaker WebApp to answer user queries. First, basic blocks such as text and lines are extracted from the given documents. Then the layout of the document, such as columns, tables, lists, and paragraphs, is extracted. This is done using the Recursive X-Y cut BIBREF0. Following layout understanding, each element is tagged as header, footer, table of contents, index, watermark, table, image, table caption, image caption, heading, heading level, or answer. Agglomerative clustering BIBREF1 is used to identify headings and their hierarchy to form an intent tree. Leaf nodes of the hierarchy are considered QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated within and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity.
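As a minimal sketch of the clustering step, the snippet below groups extracted headings into levels with agglomerative clustering over simple visual cues; the (font size, indentation) features and the distance threshold are assumptions for illustration, not the actual features of the extraction pipeline.

```python
# Minimal sketch: infer heading levels by clustering headings with similar
# visual appearance; headings sharing a cluster share a level in the intent tree.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

headings = [
    {"text": "Getting Started", "font_size": 18.0, "indent": 0.0},
    {"text": "Installation",    "font_size": 14.0, "indent": 1.0},
    {"text": "Configuration",   "font_size": 14.0, "indent": 1.0},
    {"text": "Troubleshooting", "font_size": 18.0, "indent": 0.0},
]

features = np.array([[h["font_size"], h["indent"]] for h in headings])
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=2.0)
levels = clustering.fit_predict(features)

for heading, level in zip(headings, levels):
    print(level, heading["text"])  # cluster id acts as a heading-level label
```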
System description ::: Retrieval And Ranking
QnAMaker uses Azure Search Index as its retrieval layer, followed by re-ranking on top of the retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on the QnAMaker WebApp, which gets less than 100 results to re-rank.
Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases.
System description ::: Retrieval And Ranking ::: Pre-Processing
The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query.
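A toy sketch of this normalization is shown below; the junk-character pattern and the tiny stop-word list are illustrative stand-ins, and the language detection, lemmatization, speller, and word-breaker components are not reproduced.

```python
# Minimal sketch of query normalization in the spirit of the pre-processing layer.
import re

STOP_WORDS = {"the", "a", "an", "is", "of", "to"}  # assumed tiny list

def normalize_query(query: str) -> str:
    query = query.lower()
    query = re.sub(r"[^a-z0-9\s]", " ", query)          # strip junk characters
    tokens = [t for t in query.split() if t not in STOP_WORDS]
    return " ".join(tokens)

print(normalize_query("What is the price of a Table???"))  # -> "what price table"
```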
System description ::: Retrieval And Ranking ::: Features
Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:
WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture" in a KB and the end-user asks about “price of table", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:
Distance of 2 words in the WordNet graph
Distance of Lowest Common Hypernym from the root
Knowledge-Base word importance (Local IDFs)
Global word importance (Global IDFs)
This is the most important feature in our model as it has the highest relative feature gain.
CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.
TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps.
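As a minimal sketch of the CDSSM feature described above, the snippet below computes the cosine similarity between a query vector and an answer vector produced by a dual encoder; the encoders are random-projection stand-ins, since the trained CDSSM models are not reproduced here.

```python
# Minimal sketch: the CDSSM-style feature is the cosine similarity between the
# encoded user query (source model) and the encoded candidate answer (target model).
import numpy as np

def cosine_similarity(u, v):
    denom = float(np.linalg.norm(u) * np.linalg.norm(v))
    return float(u @ v) / denom if denom > 0.0 else 0.0

def cdssm_feature(query, answer, encode_query, encode_answer):
    # encode_query / encode_answer stand in for the trained source/target CDSSMs.
    return cosine_similarity(encode_query(query), encode_answer(answer))

rng = np.random.default_rng(0)
fake_encoder = lambda text: rng.random(128)   # placeholder for a trained encoder
print(cdssm_feature("price of table", "price of furniture", fake_encoder, fake_encoder))
```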
System description ::: Retrieval And Ranking ::: Contextual Features
We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:
$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes" and the previous answer is “do you want to know about XYZ", the current query becomes “do you want to know about XYZ yes".
Candidate QnA pairs are appended with their parent questions and answers; no contextual information is used from the user's query. For instance, if a candidate QnA has the question “benefits” and its parent question was “know about XYZ”, the candidate QA's question is changed to “know about XYZ benefits”.
The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information.
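The snippet below is a small sketch of these two modifications; the dictionary schema assumed for candidate QnA pairs is illustrative only.

```python
# Minimal sketch of the contextual modifications described above.
def modify_query(query: str, previous_answer: str) -> str:
    # "yes" with previous answer "do you want to know about XYZ"
    # -> "do you want to know about XYZ yes"
    return f"{previous_answer} {query}".strip()

def modify_candidate(candidate: dict, parent: dict) -> dict:
    # Prepend the parent question/answer to the candidate QnA pair.
    return {
        "question": f"{parent['question']} {candidate['question']}".strip(),
        "answer": f"{parent['answer']} {candidate['answer']}".strip(),
    }

print(modify_query("yes", "do you want to know about XYZ"))
print(modify_candidate({"question": "benefits", "answer": "list of benefits"},
                       {"question": "know about XYZ", "answer": "XYZ overview"}))
```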
System description ::: Retrieval And Ranking ::: Modeling and Training
We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on the Generality-to-Progress ratio is used to decide the number of trees, and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there are small changes in features or training data, so that the score distribution is not changed drastically.
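A minimal sketch of this stage is given below, using a pointwise gradient-boosted regressor with built-in early stopping as a stand-in; the production model, its Generality-to-Progress early stopping, and Tolerant Pruning are not reproduced, and the synthetic features and labels are purely illustrative.

```python
# Minimal sketch: combine the per-candidate ranking features with gradient-boosted
# trees and sort candidates by predicted relevance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.random((500, 6))                               # per-candidate feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(float)      # stand-in relevance labels

ranker = GradientBoostingRegressor(
    n_estimators=500,
    learning_rate=0.05,
    validation_fraction=0.2,   # held-out split used for early stopping
    n_iter_no_change=10,       # stop when the validation score stops improving
)
ranker.fit(X, y)

def rank_candidates(features):
    scores = ranker.predict(features)
    return np.argsort(-scores)      # candidate indices, best first

print(rank_candidates(rng.random((5, 6))))
```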
System description ::: Persona Based Chit-Chat
We add support for bot-developers to directly enable handling chit-chat queries like “hi", “thank you", “what's up" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous" personality can be used for a casual bot, whereas a “Professional" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8.
System description ::: Active Learning
The majority of the KBs are created from existing FAQ pages or manuals, but improving their quality requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as the ranker's implicit signals. For instance, if for a query the CDSSM feature was confident that one QnA should be ranked higher whereas the WordNet feature thought another QnA should be ranked higher, the active learning system will try to disambiguate this by showing it as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DBSCAN clustering is applied, which optimizes the number of suggestions shown.
Evaluation and Insights
QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance on datasets across various domains. The evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which the judges do not agree on the label. Chit-chat in itself can be considered a domain. Thus, we evaluate performance on a given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). The hybrid of deep learning (CDSSM) and machine learning features gives our ranking model low computation cost, high explainability, and significant F1/AUC scores. Based on QnAMaker usage, we observed these trends:
Around 27% of the knowledge bases created use pre-built persona-based chit-chat, out of which $\sim $4% of the knowledge bases are created for chit-chat alone. The most used personality is Professional, which is used in 9% of knowledge bases.
Around 25% of developers have enabled active learning suggestions. The acceptance-to-rejection ratio for active learning suggestions is 0.31.
25.5% of the knowledge bases use a single URL as the source during creation. $\sim $41% of the knowledge bases use multiple sources, such as several URLs. 15.19% of the knowledge bases use both URLs and editorial content as sources. The rest use only editorial content.
Demonstration
We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website, and the demo video can be seen here.
Future Work
The system currently doesn't highlight the answer span and does not generate answers grounded in the KB. We will soon be supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting with improving our ranking system by using semantic vector-based search for retrieval and transformer-based models for re-ranking. | QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot |
d28260b5565d9246831e8dbe594d4f6211b60237 | d28260b5565d9246831e8dbe594d4f6211b60237_0 | Q: How they measure robustness in experiments?
Text: Introduction
Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.
In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.
Recently, other successful work on large-scale feature tuning includes force-decoding-based BIBREF11 and classification-based BIBREF12 approaches.
We aim to provide a simpler tuning method for large-scale features than MIRAs. Our motivation derives from an observation on MERT. As MERT considers the quality of only the top-1 hypothesis set, there might be more than one set of parameters which have similar top-1 performance in tuning but very different top-N hypotheses. Empirically, we expect an ideal model to benefit the whole N-best list. That is, better hypotheses should be assigned higher ranks, and this might decrease the error risk of the top-1 result on unseen data.
Plackett BIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumed to be generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for the top $k$ positions in a list, regardless of the remaining ones. When taking $k$ as 1, this model reduces to standard conditional probabilistic training, whose dual problem is actually the maximum entropy based one BIBREF14. Although Och BIBREF0 substituted direct error optimization for maximum entropy based training, probabilistic models correlate well with BLEU when features are rich enough. A similar claim also appears in BIBREF15. This also makes the new method applicable to large-scale features.
Plackett-Luce Model
Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ on their abilities to win a game, and a rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position.
In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\pi $, the probability of generating the champion is $p(r_{\pi (1)})$. Then the horse $r_{\pi (1)}$ is removed from the candidate pool.
In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\pi (1)})$ is the normalization. Then the runner-up in the rank $\pi $, the $\pi (2)$th horse, is chosen at the probability $p(r_{\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.
This procedure iterates to the last rank in $\pi $. The key idea for the Plackett-Luce model is that the choice in the $i$th position in a rank $\mathbf {\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\pi $ is given by Formula (1): $p(\mathbf {\pi })=\prod _{j=1}^{|\mathbf {\pi }|}p(r_{\pi (j)})/Z_{j}$,
where $Z_{j}=1-\sum _{t=1}^{j-1}p(r_{\pi (t)})$.
We offer a toy example (Table TABREF3) to demonstrate this procedure.
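As a small, self-contained sketch of this procedure, the snippet below evaluates Formula (1) for a full or partial permutation under a fixed distribution; the three winning probabilities are made up for illustration.

```python
def plackett_luce_probability(p, pi):
    """Probability of a (possibly partial) permutation `pi` (indices into `p`)."""
    prob, chosen_mass = 1.0, 0.0
    for index in pi:
        z = 1.0 - chosen_mass          # normalizer over the not-yet-chosen items
        prob *= p[index] / z
        chosen_mass += p[index]
    return prob

p = [0.5, 0.3, 0.2]                               # winning probabilities of three "horses"
print(plackett_luce_probability(p, [0, 1, 2]))    # full ranking
print(plackett_luce_probability(p, [0, 1]))       # partial (top-2) ranking
```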
Theorem 1 The permutation probabilities $p(\mathbf {\pi })$ form a probability distribution over a set of permutations $\Omega _{\pi }$. For example, for each $\mathbf {\pi }\in \Omega _{\pi }$, we have $p(\mathbf {\pi })>0$, and $\sum _{\pi \in \Omega _{\pi }}p(\mathbf {\pi })=1$.
We have to note that, $\Omega _{\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\mathbf {\pi }|\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\pi |)$.
Theorem 2 Given any two permutations $\mathbf {\pi }$ and $\mathbf {\pi }\prime $, and they are different only in two positions $p$ and $q$, $p<q$, with $\pi (p)=\mathbf {\pi }\prime (q)$ and $\pi (q)=\mathbf {\pi }\prime (p)$. If $p(\pi (p))>p(\pi (q))$, then $p(\pi )>p(\pi \prime )$.
In other words, exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability.
This suggests the ground-truth permutation, ranked decreasingly by their probabilities, owns the maximum permutation probability on a given distribution. In SMT, we are motivated to optimize parameters to maximize the likelihood of ground-truth permutation of an N-best hypotheses.
Due to the limitation of space, see BIBREF13, BIBREF16 for the proofs of the theorems.
Plackett-Luce Model in Statistical Machine Translation
In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ denote target hypotheses. A set of features are defined on both source and target side. We refer to $h(e_{i,*})$ as a feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.
We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses.
The ground-truth permutation of an $n$best list is simply obtained after ranking by their sentence-level BLEUs. Here we only concentrate on their relative ranks which are straightforward to compute in practice, e.g. add 1 smoothing. Let $\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentences, and our optimization objective is maximizing the log-likelihood of the ground-truth permutations and penalized using a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient:
where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.
The log-likelihood function is smooth, differentiable, and concave in the weight vector $w$, and its local maximal solution is also a global maximum. Iteratively selecting one parameter in $\alpha $ for tuning in a line search style (or MERT style) could also converge to the global maximum BIBREF17. In practice, we use the faster limited-memory BFGS (L-BFGS) algorithm BIBREF18.
Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample
The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.
The greater, the richer. In practice, we find that a rough threshold for $r$ is 5.
In engineering, the size of an N-best list with unique hypotheses is usually less than several thousands. This suggests that, if features are up to thousands or more, the Plackett-Luce model is quite suitable here. Otherwise, we could reduce the size of N-best lists by sampling to make $r$ beyond the threshold.
There may be other efficient sampling methods; here we adopt a simple one. If we want $m$ samples from a list of hypotheses $\mathbf {e}$, first, the $\frac{m}{3}$ best hypotheses and the $\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs. Second, we sample the remaining hypotheses from the distribution $p(e_{i})\propto \exp (h(e_{i})^{T}w)$, where $\mathbf {w}$ is the initial weight from the last iteration.
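A minimal sketch of this resampling scheme is given below, assuming each hypothesis carries its sentence-level BLEU and feature vector, and that the list is larger than $m$ with $m\ge 3$; the data layout is an assumption for illustration.

```python
import numpy as np

def resample_nbest(hypotheses, w, m, seed=0):
    """hypotheses: dicts with a sentence-level 'bleu' score and a feature vector 'h'."""
    rng = np.random.default_rng(seed)
    ranked = sorted(hypotheses, key=lambda e: e["bleu"], reverse=True)
    k = m // 3
    best, worst, middle = ranked[:k], ranked[-k:], ranked[k:-k]
    # Softmax over model scores of the remaining hypotheses, then sample without replacement.
    scores = np.array([e["h"] @ w for e in middle])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    picked = rng.choice(len(middle), size=m - 2 * k, replace=False, p=probs)
    return best + worst + [middle[i] for i in picked]
```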
Evaluation
We compare our method with MERT and MIRA in two tasks: iterative training and N-best list reranking. We do not list PRO BIBREF12 as a baseline, as Cherry et al. BIBREF10 have compared PRO with MIRA and MERT extensively.
In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002, and tested on MT 2004 and 2005. All features are eight basic ones BIBREF20 and extra 220 group features. We design such feature templates to group grammars by the length of source side and target side, (feat-type,a$\le $src-side$\le $b,c$\le $tgt-side$\le $d), where the feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. There are 4 $\times $ 55 extra group features.
In the second task, we rerank an N-best list from a HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available to the DARPA BOLT Chinese-English task. This system includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on a 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus.
Evaluation ::: Plackett-Luce Model for SMT Tuning
We conduct a full training of machine translation models. By default, a decoder is invoked for at most 40 times, and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms.
In getting the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we just take one randomly. In this section, all systems have only around two hundred features, hence in Plackett-Luce based training, we sample 30 hypotheses in an accumulative $n$best list in each round of training.
All results are shown in Table TABREF10. We can see that the PL($k$) systems do not perform as well as MERT or MIRA on the development data; this may be because the PL($k$) systems do not optimize BLEU and the features here are relatively few compared to the size of the N-best lists (empirical Formula DISPLAY_FORM9). However, the PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2nd to the $k$th, which is conjectured to play the role of the margin, as in SVM classification. Interestingly, MIRA ranks first in training, and still performs quite well in testing.
The PL(1) system is equivalent to a max-entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. When we increase $k$, the performance improves at first. After reaching a maximum around $k=5$, it decreases slowly. We explain this phenomenon as follows: when features are rich enough, higher BLEU scores can be fitted easily, so longer ground-truth permutations include more useful information.
Evaluation ::: Plackett-Luce Model for SMT Reranking
After de-duplication, the N-best list has an average size of around 300, with 7491 features. Referring to Formula DISPLAY_FORM9, this is an ideal setting for the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.
First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.
Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin.
This experiment shows that, with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree. | We empirically provide a formula to measure the richness in the scenario of machine translation. |
d28260b5565d9246831e8dbe594d4f6211b60237 | d28260b5565d9246831e8dbe594d4f6211b60237_1 | Q: How they measure robustness in experiments?
Text: Introduction
Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.
In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.
Recently, other successful work on large-scale feature tuning includes force-decoding-based BIBREF11 and classification-based BIBREF12 approaches.
We aim to provide a simpler tuning method for large-scale features than MIRAs. Our motivation derives from an observation on MERT. As MERT considers the quality of only the top-1 hypothesis set, there might be more than one set of parameters which have similar top-1 performance in tuning but very different top-N hypotheses. Empirically, we expect an ideal model to benefit the whole N-best list. That is, better hypotheses should be assigned higher ranks, and this might decrease the error risk of the top-1 result on unseen data.
Plackett BIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumed to be generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for the top $k$ positions in a list, regardless of the remaining ones. When taking $k$ as 1, this model reduces to standard conditional probabilistic training, whose dual problem is actually the maximum entropy based one BIBREF14. Although Och BIBREF0 substituted direct error optimization for maximum entropy based training, probabilistic models correlate well with BLEU when features are rich enough. A similar claim also appears in BIBREF15. This also makes the new method applicable to large-scale features.
Plackett-Luce Model
Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ on their abilities to win a game, and a rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position.
In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\pi $, the probability of generating the champion is $p(r_{\pi (1)})$. Then the horse $r_{\pi (1)}$ is removed from the candidate pool.
In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\pi (1)})$ is the normalization. Then the runner-up in the rank $\pi $, the $\pi (2)$th horse, is chosen at the probability $p(r_{\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.
This procedure iterates to the last rank in $\pi $. The key idea for the Plackett-Luce model is that the choice in the $i$th position in a rank $\mathbf {\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\pi $ is given by Formula (1): $p(\mathbf {\pi })=\prod _{j=1}^{|\mathbf {\pi }|}p(r_{\pi (j)})/Z_{j}$,
where $Z_{j}=1-\sum _{t=1}^{j-1}p(r_{\pi (t)})$.
We offer a toy example (Table TABREF3) to demonstrate this procedure.
Theorem 1 The permutation probabilities $p(\mathbf {\pi })$ form a probability distribution over a set of permutations $\Omega _{\pi }$. For example, for each $\mathbf {\pi }\in \Omega _{\pi }$, we have $p(\mathbf {\pi })>0$, and $\sum _{\pi \in \Omega _{\pi }}p(\mathbf {\pi })=1$.
We have to note that, $\Omega _{\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\mathbf {\pi }|\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\pi |)$.
Theorem 2 Given any two permutations $\mathbf {\pi }$ and $\mathbf {\pi }\prime $, and they are different only in two positions $p$ and $q$, $p<q$, with $\pi (p)=\mathbf {\pi }\prime (q)$ and $\pi (q)=\mathbf {\pi }\prime (p)$. If $p(\pi (p))>p(\pi (q))$, then $p(\pi )>p(\pi \prime )$.
In other words, exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability.
This suggests the ground-truth permutation, ranked decreasingly by their probabilities, owns the maximum permutation probability on a given distribution. In SMT, we are motivated to optimize parameters to maximize the likelihood of ground-truth permutation of an N-best hypotheses.
Due to the limitation of space, see BIBREF13, BIBREF16 for the proofs of the theorems.
Plackett-Luce Model in Statistical Machine Translation
In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ denote target hypotheses. A set of features are defined on both source and target side. We refer to $h(e_{i,*})$ as a feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.
We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses.
The ground-truth permutation of an $n$best list is simply obtained after ranking by their sentence-level BLEUs. Here we only concentrate on their relative ranks which are straightforward to compute in practice, e.g. add 1 smoothing. Let $\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentences, and our optimization objective is maximizing the log-likelihood of the ground-truth permutations and penalized using a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient:
where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.
The log-likelihood function is smooth, differentiable, and concave in the weight vector $w$, and its local maximal solution is also a global maximum. Iteratively selecting one parameter in $\alpha $ for tuning in a line search style (or MERT style) could also converge to the global maximum BIBREF17. In practice, we use the faster limited-memory BFGS (L-BFGS) algorithm BIBREF18.
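As an illustrative sketch of this objective, the snippet below evaluates the negative penalized log-likelihood of ground-truth (possibly partial) permutations under the exponential parameterization and hands it to SciPy's L-BFGS; analytic gradients are omitted in favor of finite differences, so this shows the objective rather than an efficient trainer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def negative_log_likelihood(w, nbest_lists):
    # nbest_lists: (H, pi) pairs; H is an (n, d) feature matrix and pi the
    # ground-truth order of the top-k hypotheses (row indices into H).
    nll = 0.5 * np.dot(w, w)     # zero-mean, unit-variance Gaussian prior
    for H, pi in nbest_lists:
        scores = H @ w
        remaining = list(range(len(scores)))
        for idx in pi:
            nll -= scores[idx] - logsumexp(scores[remaining])
            remaining.remove(idx)
    return nll

# Tiny synthetic example: 2 source sentences, 4 hypotheses each, 3 features.
rng = np.random.default_rng(0)
data = [(rng.random((4, 3)), [2, 0]), (rng.random((4, 3)), [1, 3])]
result = minimize(negative_log_likelihood, x0=np.zeros(3), args=(data,), method="L-BFGS-B")
print(result.x)
```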
Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample
The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.
The greater, the richer. In practice, we find that a rough threshold for $r$ is 5.
In engineering, the size of an N-best list with unique hypotheses is usually less than several thousands. This suggests that, if features are up to thousands or more, the Plackett-Luce model is quite suitable here. Otherwise, we could reduce the size of N-best lists by sampling to make $r$ beyond the threshold.
There may be other efficient sampling methods; here we adopt a simple one. If we want $m$ samples from a list of hypotheses $\mathbf {e}$, first, the $\frac{m}{3}$ best hypotheses and the $\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs. Second, we sample the remaining hypotheses from the distribution $p(e_{i})\propto \exp (h(e_{i})^{T}w)$, where $\mathbf {w}$ is the initial weight from the last iteration.
Evaluation
We compare our method with MERT and MIRA in two tasks: iterative training and N-best list reranking. We do not list PRO BIBREF12 as a baseline, as Cherry et al. BIBREF10 have compared PRO with MIRA and MERT extensively.
In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002, and tested on MT 2004 and 2005. All features are eight basic ones BIBREF20 and extra 220 group features. We design such feature templates to group grammars by the length of source side and target side, (feat-type,a$\le $src-side$\le $b,c$\le $tgt-side$\le $d), where the feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. There are 4 $\times $ 55 extra group features.
In the second task, we rerank an N-best list from a HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available to the DARPA BOLT Chinese-English task. This system includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on a 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus.
Evaluation ::: Plackett-Luce Model for SMT Tuning
We conduct a full training of machine translation models. By default, a decoder is invoked for at most 40 times, and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms.
In getting the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we just take one randomly. In this section, all systems have only around two hundred features, hence in Plackett-Luce based training, we sample 30 hypotheses in an accumulative $n$best list in each round of training.
All results are shown in Table TABREF10. We can see that the PL($k$) systems do not perform as well as MERT or MIRA on the development data; this may be because the PL($k$) systems do not optimize BLEU and the features here are relatively few compared to the size of the N-best lists (empirical Formula DISPLAY_FORM9). However, the PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2nd to the $k$th, which is conjectured to play the role of the margin, as in SVM classification. Interestingly, MIRA ranks first in training, and still performs quite well in testing.
The PL(1) system is equivalent to a max-entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. When we increase $k$, the performance improves at first. After reaching a maximum around $k=5$, it decreases slowly. We explain this phenomenon as follows: when features are rich enough, higher BLEU scores can be fitted easily, so longer ground-truth permutations include more useful information.
Evaluation ::: Plackett-Luce Model for SMT Reranking
After de-duplication, the N-best list has an average size of around 300, with 7491 features. Referring to Formula DISPLAY_FORM9, this is an ideal setting for the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.
First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.
Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin.
This experiment shows that, with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree. | boost the training BLEU very greatly, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$ |
8670989ca39214eda6c1d1d272457a3f3a92818b | 8670989ca39214eda6c1d1d272457a3f3a92818b_0 | Q: Is new method inferior in terms of robustness to MIRAs in experiments?
Text: Introduction
Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.
In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.
Recently, other successful work on large-scale feature tuning includes force-decoding-based BIBREF11 and classification-based BIBREF12 approaches.
We aim to provide a simpler tuning method for large-scale features than MIRAs. Our motivation derives from an observation on MERT. As MERT considers the quality of only the top-1 hypothesis set, there might be more than one set of parameters which have similar top-1 performance in tuning but very different top-N hypotheses. Empirically, we expect an ideal model to benefit the whole N-best list. That is, better hypotheses should be assigned higher ranks, and this might decrease the error risk of the top-1 result on unseen data.
Plackett BIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumed to be generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for the top $k$ positions in a list, regardless of the remaining ones. When taking $k$ as 1, this model reduces to standard conditional probabilistic training, whose dual problem is actually the maximum entropy based one BIBREF14. Although Och BIBREF0 substituted direct error optimization for maximum entropy based training, probabilistic models correlate well with BLEU when features are rich enough. A similar claim also appears in BIBREF15. This also makes the new method applicable to large-scale features.
Plackett-Luce Model
Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ on their abilities to win a game, and a rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position.
In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\pi $, the probability of generating the champion is $p(r_{\pi (1)})$. Then the horse $r_{\pi (1)}$ is removed from the candidate pool.
In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\pi (1)})$ is the normalization. Then the runner-up in the rank $\pi $, the $\pi (2)$th horse, is chosen at the probability $p(r_{\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.
This procedure iterates to the last rank in $\pi $. The key idea for the Plackett-Luce model is that the choice in the $i$th position in a rank $\mathbf {\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\pi $ is given by Formula (1): $p(\mathbf {\pi })=\prod _{j=1}^{|\mathbf {\pi }|}p(r_{\pi (j)})/Z_{j}$,
where $Z_{j}=1-\sum _{t=1}^{j-1}p(r_{\pi (t)})$.
We offer a toy example (Table TABREF3) to demonstrate this procedure.
Theorem 1 The permutation probabilities $p(\mathbf {\pi })$ form a probability distribution over a set of permutations $\Omega _{\pi }$. For example, for each $\mathbf {\pi }\in \Omega _{\pi }$, we have $p(\mathbf {\pi })>0$, and $\sum _{\pi \in \Omega _{\pi }}p(\mathbf {\pi })=1$.
We have to note that, $\Omega _{\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\mathbf {\pi }|\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\pi |)$.
Theorem 2 Given any two permutations $\mathbf {\pi }$ and $\mathbf {\pi }\prime $, and they are different only in two positions $p$ and $q$, $p<q$, with $\pi (p)=\mathbf {\pi }\prime (q)$ and $\pi (q)=\mathbf {\pi }\prime (p)$. If $p(\pi (p))>p(\pi (q))$, then $p(\pi )>p(\pi \prime )$.
In other words, exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability.
This suggests the ground-truth permutation, ranked decreasingly by their probabilities, owns the maximum permutation probability on a given distribution. In SMT, we are motivated to optimize parameters to maximize the likelihood of ground-truth permutation of an N-best hypotheses.
Due to the limitation of space, see BIBREF13, BIBREF16 for the proofs of the theorems.
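A small numeric illustration of Theorem 2 is given below: swapping two positions so that the item more likely to win is ranked later lowers the permutation probability; the probabilities are toy values and the helper simply re-evaluates Formula (1).

```python
def permutation_probability(p, pi):
    prob, used = 1.0, 0.0
    for idx in pi:
        prob *= p[idx] / (1.0 - used)   # Formula (1): normalize over items not yet chosen
        used += p[idx]
    return prob

p = [0.5, 0.3, 0.2]
better = permutation_probability(p, [0, 1, 2])   # item with p=0.5 ranked first
worse = permutation_probability(p, [1, 0, 2])    # positions of items 0 and 1 swapped
print(better, worse, better > worse)             # expected: True
```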
Plackett-Luce Model in Statistical Machine Translation
In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ denote target hypotheses. A set of features are defined on both source and target side. We refer to $h(e_{i,*})$ as a feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.
We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses.
The ground-truth permutation of an $n$best list is simply obtained after ranking by their sentence-level BLEUs. Here we only concentrate on their relative ranks which are straightforward to compute in practice, e.g. add 1 smoothing. Let $\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentences, and our optimization objective is maximizing the log-likelihood of the ground-truth permutations and penalized using a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient:
where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.
The log-likelihood function is smooth, differentiable, and concave in the weight vector $w$, and its local maximal solution is also a global maximum. Iteratively selecting one parameter in $\alpha $ for tuning in a line search style (or MERT style) could also converge to the global maximum BIBREF17. In practice, we use the faster limited-memory BFGS (L-BFGS) algorithm BIBREF18.
Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample
The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.
The greater, the richer. In practice, we find that a rough threshold for $r$ is 5.
In engineering, the size of an N-best list with unique hypotheses is usually less than several thousands. This suggests that, if features are up to thousands or more, the Plackett-Luce model is quite suitable here. Otherwise, we could reduce the size of N-best lists by sampling to make $r$ beyond the threshold.
There may be other efficient sampling methods; here we adopt a simple one. If we want $m$ samples from a list of hypotheses $\mathbf {e}$, first, the $\frac{m}{3}$ best hypotheses and the $\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs. Second, we sample the remaining hypotheses from the distribution $p(e_{i})\propto \exp (h(e_{i})^{T}w)$, where $\mathbf {w}$ is the initial weight from the last iteration.
Evaluation
We compare our method with MERT and MIRA in two tasks: iterative training and N-best list reranking. We do not list PRO BIBREF12 as a baseline, as Cherry et al. BIBREF10 have compared PRO with MIRA and MERT extensively.
In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002, and tested on MT 2004 and 2005. All features are eight basic ones BIBREF20 and extra 220 group features. We design such feature templates to group grammars by the length of source side and target side, (feat-type,a$\le $src-side$\le $b,c$\le $tgt-side$\le $d), where the feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. There are 4 $\times $ 55 extra group features.
In the second task, we rerank an N-best list from a HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available to the DARPA BOLT Chinese-English task. This system includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on a 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus.
Evaluation ::: Plackett-Luce Model for SMT Tuning
We conduct a full training of machine translation models. By default, a decoder is invoked for at most 40 times, and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms.
In getting the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we just take one randomly. In this section, all systems have only around two hundred features, hence in Plackett-Luce based training, we sample 30 hypotheses in an accumulative $n$best list in each round of training.
All results are shown in Table TABREF10. We can see that the PL($k$) systems do not perform as well as MERT or MIRA on the development data; this may be because the PL($k$) systems do not optimize BLEU and the features here are relatively few compared to the size of the N-best lists (empirical Formula DISPLAY_FORM9). However, the PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2nd to the $k$th, which is conjectured to play the role of the margin, as in SVM classification. Interestingly, MIRA ranks first in training, and still performs quite well in testing.
The PL(1) system is equivalent to a max-entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. When we increase $k$, the performance improves at first. After reaching a maximum around $k=5$, it decreases slowly. We explain this phenomenon as follows: when features are rich enough, higher BLEU scores can be fitted easily, so longer ground-truth permutations include more useful information.
Evaluation ::: Plackett-Luce Model for SMT Reranking
After de-duplication, the N-best list has an average size of around 300, with 7491 features. Referring to Formula DISPLAY_FORM9, this is an ideal setting for the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.
First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.
Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin.
This experiment shows that, with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree. | Unanswerable |
923b12c0a50b0ee22237929559fad0903a098b7b | 923b12c0a50b0ee22237929559fad0903a098b7b_0 | Q: What experiments with large-scale features are performed?
Text: Introduction
Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.
In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.
Recently, other successful work on large-scale feature tuning includes force-decoding-based BIBREF11 and classification-based BIBREF12 approaches.
We aim to provide a simpler tuning method for large-scale features than MIRAs. Our motivation derives from an observation on MERT. As MERT considers the quality of only the top-1 hypothesis set, there might be more than one set of parameters which have similar top-1 performance in tuning but very different top-N hypotheses. Empirically, we expect an ideal model to benefit the whole N-best list. That is, better hypotheses should be assigned higher ranks, and this might decrease the error risk of the top-1 result on unseen data.
Plackett BIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumed to be generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for the top $k$ positions in a list, regardless of the remaining ones. When taking $k$ as 1, this model reduces to standard conditional probabilistic training, whose dual problem is actually the maximum entropy based one BIBREF14. Although Och BIBREF0 substituted direct error optimization for maximum entropy based training, probabilistic models correlate well with BLEU when features are rich enough. A similar claim also appears in BIBREF15. This also makes the new method applicable to large-scale features.
Plackett-Luce Model
Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ on their abilities to win a game, and a rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position.
In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\pi $, the probability of generating the champion is $p(r_{\pi (1)})$. Then the horse $r_{\pi (1)}$ is removed from the candidate pool.
In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\pi (1)})$ is the normalization. Then the runner-up in the rank $\pi $, the $\pi (2)$th horse, is chosen at the probability $p(r_{\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.
This procedure iterates to the last rank in $\pi $. The key idea for the Plackett-Luce model is that the choice in the $i$th position in a rank $\mathbf {\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\pi $ is given by Formula (1): $p(\mathbf {\pi })=\prod _{j=1}^{|\mathbf {\pi }|}p(r_{\pi (j)})/Z_{j}$,
where $Z_{j}=1-\sum _{t=1}^{j-1}p(r_{\pi (t)})$.
We offer a toy example (Table TABREF3) to demonstrate this procedure.
Theorem 1 The permutation probabilities $p(\mathbf {\pi })$ form a probability distribution over a set of permutations $\Omega _{\pi }$. For example, for each $\mathbf {\pi }\in \Omega _{\pi }$, we have $p(\mathbf {\pi })>0$, and $\sum _{\pi \in \Omega _{\pi }}p(\mathbf {\pi })=1$.
We have to note that, $\Omega _{\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\mathbf {\pi }|\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\pi |)$.
Theorem 2 Let $\mathbf {\pi }$ and $\mathbf {\pi }^{\prime }$ be two permutations that differ only in two positions $p$ and $q$, $p<q$, with $\pi (p)=\pi ^{\prime }(q)$ and $\pi (q)=\pi ^{\prime }(p)$. If $p(r_{\pi (p)})>p(r_{\pi (q)})$, then $p(\mathbf {\pi })>p(\mathbf {\pi }^{\prime })$.

In other words, if an item more likely to win is ranked after a less likely one, swapping the two positions increases the permutation probability.

This implies that the ground-truth permutation, which ranks the items in decreasing order of their probabilities, attains the maximum permutation probability under a given distribution. In SMT, this motivates optimizing the parameters to maximize the likelihood of the ground-truth permutation of an N-best list of hypotheses.

Due to space limitations, we refer the reader to BIBREF13, BIBREF16 for proofs of the theorems.
Plackett-Luce Model in Statistical Machine Translation
In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote the source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ the target hypotheses. A set of features is defined on both the source and target sides. We refer to $h(e_{i,*})$ as the feature vector of a hypothesis from the $i$th source sentence, and its score under the ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.
We first follow the popular exponential form to define a parameterized probability distribution over a list of hypotheses, $p(e_{i,j})\propto \exp (h(e_{i,j})^{T}w)$.
The ground-truth permutation of an $n$-best list is obtained simply by ranking the hypotheses by their sentence-level BLEU scores (computed with add-1 smoothing, which is straightforward in practice); we only use the resulting relative ranks. Let $\pi _{i}^{*}$ be the ground-truth permutation of the hypotheses from the $i$th source sentence. Our optimization objective is to maximize the log-likelihood of the ground-truth permutations, penalized with a zero-mean, unit-variance Gaussian prior. This results in the following objective and gradient:
where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.
The log-likelihood function is smooth, differentiable, and concave in the weight vector $w$, so any local maximum is also a global maximum. Iteratively tuning one parameter of $w$ at a time in a line-search (MERT-style) fashion would also converge to the global maximum BIBREF17. In practice, we use the faster limited-memory BFGS (L-BFGS) algorithm BIBREF18.
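The objective and gradient referenced above appear to have been lost in extraction. As a hedged reconstruction rather than the authors' exact formulation, the sketch below implements a standard penalized Plackett-Luce log-likelihood with hypothesis scores $h(e)^{T}w$, the ground-truth order obtained by sorting on sentence-level BLEU, a zero-mean unit-variance Gaussian prior, and SciPy's L-BFGS; all function and variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def neg_log_likelihood_and_grad(w, nbest_lists, k):
    """Negative penalized Plackett-Luce log-likelihood and its gradient.

    nbest_lists: list of (H, bleu) pairs, one per source sentence, where H is an
                 (N x D) feature matrix and bleu an (N,) array of sentence-level
                 BLEU scores defining the ground-truth permutation.
    k          : length of the (partial) ground-truth permutation, i.e. PL(k).
    """
    nll = 0.5 * np.dot(w, w)            # zero-mean, unit-variance Gaussian prior
    grad = w.copy()
    for H, bleu in nbest_lists:
        order = np.argsort(-bleu)       # ground-truth permutation: best BLEU first
        H = H[order]
        s = H @ w                       # model scores of the ranked hypotheses
        for j in range(min(k, len(s))):
            nll -= s[j] - logsumexp(s[j:])       # -log p(pi*(j) | remaining candidates)
            probs = softmax(s[j:])               # conditional choice distribution
            grad -= H[j] - probs @ H[j:]         # accumulate the negative gradient
    return nll, grad

# Toy usage with random data; in practice H would hold the SMT feature vectors.
rng = np.random.default_rng(0)
data = [(rng.normal(size=(20, 8)), rng.random(20)) for _ in range(3)]
res = minimize(neg_log_likelihood_and_grad, np.zeros(8), args=(data, 5),
               jac=True, method="L-BFGS-B")
print(res.x)
```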
Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample
The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score; however, it correlates well with BLEU when the features are rich. The notion of “rich” is qualitative and hard to define across applications, so we provide an empirical formula to measure feature richness in the machine translation setting.
The greater the value of $r$, the richer the feature set. In practice, we find a rough threshold for $r$ of 5.
In practice, the size of an N-best list with unique hypotheses is usually below several thousand. This suggests that the Plackett-Luce model is well suited when the number of features reaches the thousands or more. Otherwise, we can reduce the size of the N-best lists by sampling so that $r$ exceeds the threshold.
There may be other efficient sampling methods; here we adopt a simple one. If we want $m$ samples from a list of hypotheses $\mathbf {e}$, we first take the $\frac{m}{3}$ best and the $\frac{m}{3}$ worst hypotheses according to their sentence-level BLEU scores. We then sample the remaining $\frac{m}{3}$ from the other hypotheses according to the distribution $p(e_{i})\propto \exp (h(e_{i})^{T}w)$, where $w$ is the initial weight vector from the last iteration.
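A minimal sketch of this resampling scheme (our own naming; it assumes sentence-level BLEU scores and feature vectors are available as arrays) could look like this:

```python
import numpy as np

def resample_nbest(bleu, feats, w, m, rng=None):
    """Return indices of m sampled hypotheses: the m/3 best and the m/3 worst by
    sentence-level BLEU, plus the rest drawn from p(e_i) ~ exp(h(e_i)^T w)."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(-np.asarray(bleu))        # best BLEU first
    third = m // 3
    kept = np.concatenate([order[:third], order[-third:]])
    rest = order[third:-third]
    scores = feats[rest] @ w                     # h(e_i)^T w for the remaining hypotheses
    probs = np.exp(scores - scores.max())        # numerically stable exponentiation
    probs /= probs.sum()                         # normalize to a distribution
    drawn = rng.choice(rest, size=m - kept.size, replace=False, p=probs)
    return np.concatenate([kept, drawn])

# Toy usage: 100 hypotheses with 8 features each, sampling m = 30 of them.
rng = np.random.default_rng(0)
idx = resample_nbest(rng.random(100), rng.normal(size=(100, 8)), np.zeros(8), 30, rng)
print(len(idx))                                  # 30
```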
Evaluation
We compare our method with MERT and MIRA on two tasks: iterative tuning and N-best list reranking. We do not include PRO BIBREF12 as a baseline, since Cherry et al. BIBREF10 have already compared PRO with MIRA and MERT extensively.
In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of the Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002 and tested on MT 2004 and 2005. The features consist of the eight basic ones BIBREF20 plus 220 extra group features. We design feature templates that group grammars by the lengths of their source and target sides, (feat-type, a$\le $src-side$\le $b, c$\le $tgt-side$\le $d), where feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. This yields 4 $\times $ 55 extra group features.
In the second task, we rerank an N-best list from an HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available for the DARPA BOLT Chinese-English task. It includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus.
Evaluation ::: Plackett-Luce Model for SMT Tuning
We conduct full training of the machine translation models. By default, the decoder is invoked at most 40 times, and each time it outputs 200 hypotheses, which are combined with those from previous iterations and fed to the tuning algorithms.
When constructing the ground-truth permutations, many hypotheses are tied with the same sentence-level BLEU; we break such ties randomly. In this section all systems have only around two hundred features, so in Plackett-Luce based training we sample 30 hypotheses from the accumulated $n$-best list in each round of training.
All results are shown in Table TABREF10. None of the PL($k$) systems performs as well as MERT or MIRA on the development data, possibly because the PL($k$) systems do not optimize BLEU directly and the features here are relatively few compared to the size of the N-best lists (empirical Formula DISPLAY_FORM9). However, the PL($k$) systems outperform MERT in testing. PL($k$) takes into account the quality of the hypotheses from the 2nd to the $k$th position, which we conjecture plays a role similar to the margin in SVM classification. Interestingly, MIRA ranks first in training and still performs quite well in testing.
The PL(1) system is equivalent to a maximum entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. As $k$ increases, performance improves at first; after reaching a maximum around $k=5$, it decreases slowly. We explain this phenomenon as follows: when the features are rich enough, higher BLEU scores can be fitted easily, and longer ground-truth permutations then contain more useful information.
Evaluation ::: Plackett-Luce Model for SMT Reranking
After de-duplication, the N-best lists have an average size of around 300, with 7491 features. According to Formula DISPLAY_FORM9, this is an ideal setting for the Plackett-Luce model. Results are shown in Figure FIGREF12, and we observe some interesting phenomena.
First, the Plackett-Luce models boost the training BLEU greatly, up to 2.5 points higher than MIRA. This supports our assumption that richer features benefit BLEU, even though the models are optimized towards a different objective.
Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated by a moderately large $k$. For PL(1) the over-fitting is quite obvious: the portion of its curve that surpasses MIRA is the smallest among all $k$, and its converged performance is below the baseline. When $k$ is at least 5, the curves lie almost entirely above the MIRA line, and after 500 L-BFGS iterations their performance is no worse than the baseline, though only by a small margin.
This experiment shows that, with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree. | Plackett-Luce Model for SMT Reranking
67131c15aceeb51ae1d3b2b8241c8750a19cca8e | 67131c15aceeb51ae1d3b2b8241c8750a19cca8e_0 | Q: Which ASR system(s) is used in this work?
Text: Introduction
Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable understanding of the semantics of the input speech. When there is an incoming speech signal, the ASR module picks it up and attempts to transcribe it. An ASR model can generate multiple interpretations for most speech inputs, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually passed to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, the NLU modules first classify the utterance into one of many possible domains, and the further analysis on intent classification and slot tagging is domain-specific.

In spite of impressive progress in the current SLU pipeline, the interpretation of speech can still contain errors. Sometimes the top-1 recognition hypothesis of the ASR module is ungrammatical or implausible and far from the ground-truth transcription BIBREF1, BIBREF2. In such cases, we find that an interpretation which exactly matches, or is more similar to, the transcription can often be found among the remaining hypotheses ($2^{nd}- n^{th}$).
To illustrate the value of the $2^{nd}- n^{th}$ hypotheses, we count the frequency of exact matching and more similar (smaller edit distance compared to the 1st hypothesis) to transcription for different positions of the $n$-best hypotheses list. Table TABREF1 exhibits the results. For the explored dataset, we only collect the top 5 interpretations for each utterance ($n = 5$). Notably, when the correct recognition exists among the 5 best hypotheses, 50% of the time (sum of the first row's percentages) it occurs among the $2^{nd}-5^{th}$ positions. Moreover, as shown by the second row in Table TABREF1, compared to the top recognition hypothesis, the other hypotheses can sometimes be more similar to the transcription.
Over the past few years, we have observed the success of reranking the $n$-best hypotheses BIBREF1, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 before feeding the best interpretation to the NLU module. These approaches build the reranking framework on morphological, lexical or syntactic features BIBREF8, BIBREF9, BIBREF10, speech recognition features such as the confidence score BIBREF1, BIBREF4, and other features such as the number of tokens and the rank position BIBREF1. They are effective at selecting the best hypothesis from the list and reducing the word error rate (WER) BIBREF11 of speech recognition.

Such reranking models can help in the first two cases in Table TABREF2, where one hypothesis matches the transcription. However, in other cases, such as the third row, it is hard for them to integrate the fragmented information spread across multiple hypotheses.
This paper proposes various methods integrating $n$-best hypotheses to tackle the problem. To the best of our knowledge, this is the first study that attempts to collectively exploit the $n$-best speech interpretations in the SLU system. This paper serves as the basis of our $n$-best-hypotheses-based SLU system, focusing on the methods of integration for the hypotheses. Since further improvements of the integration framework require considerable setup and descriptions, where jointly optimized tasks (e.g. transcription reconstruction) trained with multiple ways (multitask BIBREF12, multistage learning BIBREF13) and more features (confidence score, rank position, etc.) are involved, we leave those to a subsequent article.
This paper is organized as follows. Section SECREF2 introduces the Baseline, Oracle and Direct models. Section SECREF3 describes proposed ways to integrate $n$-best hypotheses during training. The experimental setup and results are described in Section SECREF4. Section SECREF5 contains conclusions and future work.
Baseline, Oracle and Direct Models ::: Baseline and Oracle
The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.
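As a rough, hypothetical illustration (not the authors' implementation), the PyTorch sketch below mirrors the described pipeline: BPE token embeddings, a BiLSTM encoder whose final forward and backward states form the utterance vector, and an MLP classifier over the domain/intent tags. Layer sizes and the vocabulary size are placeholders.

```python
import torch
import torch.nn as nn

class BaselineClassifier(nn.Module):
    """BPE embeddings -> BiLSTM encoder -> MLP (FNN + softmax) tag classifier."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256, num_tags=23):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                 nn.ReLU(),
                                 nn.Linear(hidden_dim, num_tags))

    def forward(self, bpe_ids):                        # bpe_ids: (batch, seq_len)
        emb = self.embedding(bpe_ids)
        _, (h_n, _) = self.encoder(emb)                # h_n: (2, batch, hidden_dim)
        utterance = torch.cat([h_n[0], h_n[1]], dim=-1)  # forward + backward states
        return torch.log_softmax(self.mlp(utterance), dim=-1)

# Toy usage: a batch of 4 BPE-encoded utterances of length 12.
model = BaselineClassifier(vocab_size=10000)
print(model(torch.randint(1, 10000, (4, 12))).shape)   # torch.Size([4, 23])
```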
For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\tilde{t})$, where $p(\tilde{t}) \leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\text{ASR}\ 1^{st}\ \text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription.
Baseline, Oracle and Direct Models ::: Direct Models
Besides the Baseline and Oracle, where only the ASR 1-best hypothesis is considered, we also perform experiments that utilize the ASR $n$-best hypotheses during evaluation. The models evaluated on $n$-bests with a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7); a brief code sketch of all three strategies is given after the list:
Majority Vote. We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label, i.e. Music.
Sort by Score. After parallel evaluation on all hypotheses, sort the prediction by the corresponding confidence score and choose the one with the highest score, i.e. Video.
Rerank (Oracle). Since the current rerank models (e.g., BIBREF1, BIBREF3, BIBREF4) attempt to select the hypothesis most similar to transcription, we propose the Rerank (Oracle), which picks the hypothesis with the smallest edit distance to transcription (assume it is the $a$-th best) during evaluation and uses its corresponding prediction.
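The three strategies can be sketched as follows; this is our own illustrative code, where `baseline_model` stands for the pre-trained BM and `edit_distance` for any standard Levenshtein routine (neither is an interface defined by the paper).

```python
from collections import Counter

def majority_vote(baseline_model, hypotheses):
    """Apply the BM to every hypothesis and return the most frequent label."""
    labels = [baseline_model.predict(h) for h in hypotheses]
    return Counter(labels).most_common(1)[0][0]

def sort_by_score(baseline_model, hypotheses):
    """Return the label whose prediction carries the highest confidence score."""
    preds = [baseline_model.predict_with_score(h) for h in hypotheses]  # (label, score)
    return max(preds, key=lambda p: p[1])[0]

def rerank_oracle(baseline_model, hypotheses, transcription, edit_distance):
    """Predict on the hypothesis closest (in edit distance) to the transcription."""
    best = min(hypotheses, key=lambda h: edit_distance(h, transcription))
    return baseline_model.predict(best)
```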
Integration of N-BEST Hypotheses
All the above mentioned models apply the BM trained on one interpretation (transcription). Their abilities to take advantage of multiple interpretations are actually not trained. As a further step, we propose multiple ways to integrate the $n$-best hypotheses during training. The explored methods can be divided into two groups as shown in Fig. FIGREF11. Let $H_1, H_2,..., H_n $ denote all the hypotheses from ASR and $bp_{H_k, i} \in BPs$ denotes the $i$-th pair of bytes (BP) in the $k^{th}$ best hypothesis. The model parameters associated with the two possible ways both contain: embedding $e_{bp}$ for pairs of bytes, BiLSTM parameters $\theta $ and MLP parameters $W, b$.
Integration of N-BEST Hypotheses ::: Hypothesized Text Concatenation
The basic integration method (Combined Sentence) concatenates the $n$-best hypothesized text. We separate hypotheses with a special delimiter ($<$SEP$>$). We assume BPE totally produces $m$ BPs (delimiters are not split during encoding). Suppose the $n^{th}$ hypothesis has $j$ pairs. The entire model can be formulated as:
In Eqn. DISPLAY_FORM13, the connected hypotheses and separators are encoded via BiLSTM to a sequence of hidden state vectors. Each hidden state vector, e.g. $h_1$, is the concatenation of forward $h_{1f}$ and backward $h_{1b}$ states. The concatenation of the last state of the forward and backward LSTM forms the output vector of BiLSTM (concatenation denoted as $[,]$). Then, in Eqn. DISPLAY_FORM14, the MLP module defines the probability of a specific tag (domain or intent) $\tilde{t}$ as the normalized activation ($\sigma $) output after linear transformation of the output vector.
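As a small illustration (our own, with made-up hypothesis strings), building the combined input is just a delimiter-joined concatenation prior to BPE encoding:

```python
def build_combined_sentence(hypotheses, sep_token="<SEP>"):
    """Join the n-best hypothesized texts with the special delimiter, as in the
    Combined Sentence model; the result is then BPE-encoded and fed to the BiLSTM."""
    return f" {sep_token} ".join(hypotheses)

print(build_combined_sentence(["play jazz music", "play chess music", "lay jazz music"]))
# play jazz music <SEP> play chess music <SEP> lay jazz music
```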
Integration of N-BEST Hypotheses ::: Hypothesis Embedding Concatenation
The concatenation of hypothesized text leverages the $n$-best list by transferring information among hypotheses within an embedding framework, the BiLSTM. However, since every layer has access to both the preceding and subsequent context, the embeddings of the $n$-best hypotheses influence one another, which blurs the representation and makes the whole framework sensitive to noise in the hypotheses.
As the second group of integration approaches, we develop models, PoolingAvg/Max, on the concatenation of hypothesis embedding, which isolate the embedding process among hypotheses and summarize the features by a pooling layer. For each hypothesis (e.g., $i^{th}$ best in Eqn. DISPLAY_FORM16 with $j$ pairs of bytes), we could get a sequence of hidden states from BiLSTM and obtain its final output state by concatenating the first and last hidden state ($h_{output_i}$ in Eqn. DISPLAY_FORM17). Then, we stack all the output states vertically as shown in Eqn. SECREF15. Note that in the real data, we will not always have a fixed size of hypotheses list. For a list with $r$ ($<n$) interpretations, we get the embedding for each of them and pad with the embedding of the first best hypothesis until a fixed size $n$. When $r\ge n$, we only stack the top $n$ embeddings. We employ $h_{output_1}$ for padding to enhance the influence of the top 1 hypothesis, which is more reliable. Finally, one unified representation could be achieved via Pooling (Max/Avg pooling with $n$ by 1 sliding window and stride 1) on the concatenation and one score could be produced per possible tag for the given task.
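A hedged PyTorch sketch of the pooling integration: each hypothesis is encoded independently by the shared BiLSTM, lists shorter than $n$ are padded with the 1-best embedding, and average (or max) pooling over the stacked output states yields the unified representation. The `encode` callable is assumed to expose the per-hypothesis output vector as in the baseline sketch above; the names are ours.

```python
import torch

def pooled_representation(encode, hypotheses_ids, n=5, mode="avg"):
    """encode: maps a (1, seq_len) tensor of BPE ids to a (1, 2*hidden) output state.
    hypotheses_ids: list of r encoded hypotheses for one utterance (r may differ from n)."""
    states = [encode(h) for h in hypotheses_ids[:n]]    # top-n hypothesis embeddings
    while len(states) < n:                              # pad with the 1-best embedding
        states.append(states[0])
    stacked = torch.cat(states, dim=0)                  # (n, 2*hidden)
    if mode == "avg":
        return stacked.mean(dim=0, keepdim=True)        # PoolingAvg
    return stacked.max(dim=0, keepdim=True).values      # PoolingMax
```

Average and max pooling differ only in this final reduction; the experiments below report PoolingAvg as the stronger variant.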
Experiment ::: Dataset
We conduct our experiments on $\sim $ 8.7M annotated, anonymised user utterances, derived from requests across 23 domains.
Experiment ::: Performance on Entire Test Set
Table TABREF24 shows the relative error reduction (RErr) of Baseline, Oracle and our proposed models on the entire test set ($\sim $ 300K utterances) for multi-class domain classification. We can see among all the direct methods, predicting based on the hypothesis most similar to the transcription (Rerank (Oracle)) is the best.
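The paper does not spell out how RErr is computed; assuming it measures the fraction of the Baseline's classification error that a model removes (our reading, not stated in the text), it could be computed as below, with the accuracies in the example being purely hypothetical.

```python
def relative_error_reduction(baseline_accuracy, model_accuracy):
    """RErr under the assumption that it is the share of the Baseline's
    classification error removed by a model (so the Baseline itself scores 0%)."""
    baseline_error = 1.0 - baseline_accuracy
    model_error = 1.0 - model_accuracy
    return 100.0 * (baseline_error - model_error) / baseline_error

print(relative_error_reduction(0.90, 0.9143))   # ~14.3% for a hypothetical accuracy pair
```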
As for the other models, which integrate the $n$-bests during training, PoolingAvg obtains the highest relative improvement, 14.29%. It also turns out that all the integration methods outperform the direct models drastically. This shows that having access to $n$-best hypotheses during training is crucial for the quality of the predicted semantics.
Experiment ::: Performance Comparison among Various Subsets
To further investigate the source of the improvements, we split the test set into two parts based on whether the ASR first best agrees with the transcription, and evaluate them separately. Comparing Table TABREF26 and Table TABREF27, the benefits of using multiple hypotheses are clearly gained mainly when the ASR 1st best disagrees with the transcription. When the ASR 1st best agrees with the transcription, the proposed integration models also maintain the performance. Under that condition, we can still improve a little (3.56%) because, by introducing multiple ASR hypotheses, we have more information: even when the transcription/ASR 1st best does not appear among the training set's transcriptions, its $n$-best list may contain hypotheses similar to those in the training set's $n$-bests, so our integration model trained on $n$-best hypotheses still has clues for prediction. This series of comparisons reveals that our approaches integrating the hypotheses are robust to ASR errors, and whenever the ASR model makes mistakes, we outperform the baseline more significantly.
Experiment ::: Improvements on Different Domains and Different Numbers of Hypotheses
Among all the 23 domains, we choose 8 popular domains for further comparison between the Baseline and the best model of Table TABREF24, PoolingAvg. Fig. FIGREF29 exhibits the results. We find that PoolingAvg consistently improves the accuracy for all 8 domains.

In the previous experiments, the number of hypotheses utilized for each utterance during evaluation is five, which means we use the top 5 interpretations when the size of the ASR recognition list is not smaller than 5, and all the interpretations otherwise. Varying the number of hypotheses at evaluation time, Fig. FIGREF30 shows a monotonic increase with access to more hypotheses for PoolingAvg and PoolingMax (Sort by Score is shown because it is the best achievable direct model, while Rerank (Oracle) is not realistic). The growth becomes gentle after four hypotheses are leveraged.
Experiment ::: Intent Classification
Since another downstream task, intent classification, is similar to domain classification, we only report the best model from domain classification, PoolingAvg, on domain-specific intent classification for three popular domains, due to space limitations. As Table TABREF32 shows, the margins of using multiple hypotheses with PoolingAvg are significant as well.
Conclusions and Future Work
This paper improves the SLU system robustness to ASR errors by integrating $n$-best hypotheses in different ways, e.g. the aggregation of predictions from hypotheses or the concatenation of hypothesis text or embedding. We can achieve significant classification accuracy improvements over production-quality baselines on domain and intent classifications, 14% to 25% relative gains. The improvement is more significant for a subset of testing data where ASR first best is different from transcription. We also observe that with more hypotheses utilized, the performance can be further improved. In the future, we aim to employ additional features (e.g. confidence scores for hypotheses or tokens) to integrate $n$-bests more efficiently, where we can train a function $f$ to obtain a weight for each hypothesis embedding before pooling. Another direction is using deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17, BIBREF18, which can provide a compact representation of multiple hypotheses and more information like times, in the SLU system.
Acknowledgements
We would like to thank Junghoo (John) Cho for proofreading. | Oracle |
579a0603ec56fc2b4aa8566810041dbb0cd7b5e7 | 579a0603ec56fc2b4aa8566810041dbb0cd7b5e7_0 | Q: What are the series of simple models?
Text: Introduction
Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable understanding of the semantics of the input speech. When there is an incoming speech signal, the ASR module picks it up and attempts to transcribe it. An ASR model can generate multiple interpretations for most speech inputs, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually passed to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, the NLU modules first classify the utterance into one of many possible domains, and the further analysis on intent classification and slot tagging is domain-specific.

In spite of impressive progress in the current SLU pipeline, the interpretation of speech can still contain errors. Sometimes the top-1 recognition hypothesis of the ASR module is ungrammatical or implausible and far from the ground-truth transcription BIBREF1, BIBREF2. In such cases, we find that an interpretation which exactly matches, or is more similar to, the transcription can often be found among the remaining hypotheses ($2^{nd}- n^{th}$).
To illustrate the value of the $2^{nd}- n^{th}$ hypotheses, we count the frequency of exact matching and more similar (smaller edit distance compared to the 1st hypothesis) to transcription for different positions of the $n$-best hypotheses list. Table TABREF1 exhibits the results. For the explored dataset, we only collect the top 5 interpretations for each utterance ($n = 5$). Notably, when the correct recognition exists among the 5 best hypotheses, 50% of the time (sum of the first row's percentages) it occurs among the $2^{nd}-5^{th}$ positions. Moreover, as shown by the second row in Table TABREF1, compared to the top recognition hypothesis, the other hypotheses can sometimes be more similar to the transcription.
Over the past few years, we have observed the success of reranking the $n$-best hypotheses BIBREF1, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 before feeding the best interpretation to the NLU module. These approaches propose the reranking framework by involving morphological, lexical or syntactic features BIBREF8, BIBREF9, BIBREF10, speech recognition features like confidence score BIBREF1, BIBREF4, and other features like number of tokens, rank position BIBREF1. They are effective to select the best from the hypotheses list and reduce the word error rate (WER) BIBREF11 of speech recognition.
Those reranking models could benefit the first two cases in Table TABREF2 when there is an utterance matching with transcription. However, in other cases like the third row, it is hard to integrate the fragmented information in multiple hypotheses.
This paper proposes various methods integrating $n$-best hypotheses to tackle the problem. To the best of our knowledge, this is the first study that attempts to collectively exploit the $n$-best speech interpretations in the SLU system. This paper serves as the basis of our $n$-best-hypotheses-based SLU system, focusing on the methods of integration for the hypotheses. Since further improvements of the integration framework require considerable setup and descriptions, where jointly optimized tasks (e.g. transcription reconstruction) trained with multiple ways (multitask BIBREF12, multistage learning BIBREF13) and more features (confidence score, rank position, etc.) are involved, we leave those to a subsequent article.
This paper is organized as follows. Section SECREF2 introduces the Baseline, Oracle and Direct models. Section SECREF3 describes proposed ways to integrate $n$-best hypotheses during training. The experimental setup and results are described in Section SECREF4. Section SECREF5 contains conclusions and future work.
Baseline, Oracle and Direct Models ::: Baseline and Oracle
The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.
For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\tilde{t})$, where $p(\tilde{t}) \leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\text{ASR}\ 1^{st}\ \text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription.
Baseline, Oracle and Direct Models ::: Direct Models
Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):
Majority Vote. We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label, i.e. Music.
Sort by Score. After parallel evaluation on all hypotheses, sort the prediction by the corresponding confidence score and choose the one with the highest score, i.e. Video.
Rerank (Oracle). Since the current rerank models (e.g., BIBREF1, BIBREF3, BIBREF4) attempt to select the hypothesis most similar to transcription, we propose the Rerank (Oracle), which picks the hypothesis with the smallest edit distance to transcription (assume it is the $a$-th best) during evaluation and uses its corresponding prediction.
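The edit distance used by Rerank (Oracle) can be any standard Levenshtein routine; a simple word-level version (our own sketch, not tied to the paper's implementation) is:

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between a hypothesis and the transcription."""
    h, r = hyp.split(), ref.split()
    dp = list(range(len(r) + 1))            # distances for the empty hypothesis prefix
    for i, hw in enumerate(h, start=1):
        prev, dp[0] = dp[0], i
        for j, rw in enumerate(r, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,           # deletion
                        dp[j - 1] + 1,       # insertion
                        prev + (hw != rw))   # substitution (0 cost on a match)
            prev = cur
    return dp[-1]

print(edit_distance("play the music", "play some music"))   # 1
```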
Integration of N-BEST Hypotheses
All the above mentioned models apply the BM trained on one interpretation (transcription). Their abilities to take advantage of multiple interpretations are actually not trained. As a further step, we propose multiple ways to integrate the $n$-best hypotheses during training. The explored methods can be divided into two groups as shown in Fig. FIGREF11. Let $H_1, H_2,..., H_n $ denote all the hypotheses from ASR and $bp_{H_k, i} \in BPs$ denotes the $i$-th pair of bytes (BP) in the $k^{th}$ best hypothesis. The model parameters associated with the two possible ways both contain: embedding $e_{bp}$ for pairs of bytes, BiLSTM parameters $\theta $ and MLP parameters $W, b$.
Integration of N-BEST Hypotheses ::: Hypothesized Text Concatenation
The basic integration method (Combined Sentence) concatenates the $n$-best hypothesized text. We separate hypotheses with a special delimiter ($<$SEP$>$). We assume BPE totally produces $m$ BPs (delimiters are not split during encoding). Suppose the $n^{th}$ hypothesis has $j$ pairs. The entire model can be formulated as:
In Eqn. DISPLAY_FORM13, the connected hypotheses and separators are encoded via BiLSTM to a sequence of hidden state vectors. Each hidden state vector, e.g. $h_1$, is the concatenation of forward $h_{1f}$ and backward $h_{1b}$ states. The concatenation of the last state of the forward and backward LSTM forms the output vector of BiLSTM (concatenation denoted as $[,]$). Then, in Eqn. DISPLAY_FORM14, the MLP module defines the probability of a specific tag (domain or intent) $\tilde{t}$ as the normalized activation ($\sigma $) output after linear transformation of the output vector.
Integration of N-BEST Hypotheses ::: Hypothesis Embedding Concatenation
The concatenation of hypothesized text leverages the $n$-best list by transferring information among hypotheses within an embedding framework, the BiLSTM. However, since every layer has access to both the preceding and subsequent context, the embeddings of the $n$-best hypotheses influence one another, which blurs the representation and makes the whole framework sensitive to noise in the hypotheses.
As the second group of integration approaches, we develop models, PoolingAvg/Max, on the concatenation of hypothesis embedding, which isolate the embedding process among hypotheses and summarize the features by a pooling layer. For each hypothesis (e.g., $i^{th}$ best in Eqn. DISPLAY_FORM16 with $j$ pairs of bytes), we could get a sequence of hidden states from BiLSTM and obtain its final output state by concatenating the first and last hidden state ($h_{output_i}$ in Eqn. DISPLAY_FORM17). Then, we stack all the output states vertically as shown in Eqn. SECREF15. Note that in the real data, we will not always have a fixed size of hypotheses list. For a list with $r$ ($<n$) interpretations, we get the embedding for each of them and pad with the embedding of the first best hypothesis until a fixed size $n$. When $r\ge n$, we only stack the top $n$ embeddings. We employ $h_{output_1}$ for padding to enhance the influence of the top 1 hypothesis, which is more reliable. Finally, one unified representation could be achieved via Pooling (Max/Avg pooling with $n$ by 1 sliding window and stride 1) on the concatenation and one score could be produced per possible tag for the given task.
Experiment ::: Dataset
We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains.
Experiment ::: Performance on Entire Test Set
Table TABREF24 shows the relative error reduction (RErr) of Baseline, Oracle and our proposed models on the entire test set ($\sim $ 300K utterances) for multi-class domain classification. We can see among all the direct methods, predicting based on the hypothesis most similar to the transcription (Rerank (Oracle)) is the best.
As for the other models, which integrate the $n$-bests during training, PoolingAvg obtains the highest relative improvement, 14.29%. It also turns out that all the integration methods outperform the direct models drastically. This shows that having access to $n$-best hypotheses during training is crucial for the quality of the predicted semantics.
Experiment ::: Performance Comparison among Various Subsets
To further investigate the source of the improvements, we split the test set into two parts based on whether the ASR first best agrees with the transcription, and evaluate them separately. Comparing Table TABREF26 and Table TABREF27, the benefits of using multiple hypotheses are clearly gained mainly when the ASR 1st best disagrees with the transcription. When the ASR 1st best agrees with the transcription, the proposed integration models also maintain the performance. Under that condition, we can still improve a little (3.56%) because, by introducing multiple ASR hypotheses, we have more information: even when the transcription/ASR 1st best does not appear among the training set's transcriptions, its $n$-best list may contain hypotheses similar to those in the training set's $n$-bests, so our integration model trained on $n$-best hypotheses still has clues for prediction. This series of comparisons reveals that our approaches integrating the hypotheses are robust to ASR errors, and whenever the ASR model makes mistakes, we outperform the baseline more significantly.
Experiment ::: Improvements on Different Domains and Different Numbers of Hypotheses
Among all the 23 domains, we choose 8 popular domains for further comparison between the Baseline and the best model of Table TABREF24, PoolingAvg. Fig. FIGREF29 exhibits the results. We find that PoolingAvg consistently improves the accuracy for all 8 domains.

In the previous experiments, the number of hypotheses utilized for each utterance during evaluation is five, which means we use the top 5 interpretations when the size of the ASR recognition list is not smaller than 5, and all the interpretations otherwise. Varying the number of hypotheses at evaluation time, Fig. FIGREF30 shows a monotonic increase with access to more hypotheses for PoolingAvg and PoolingMax (Sort by Score is shown because it is the best achievable direct model, while Rerank (Oracle) is not realistic). The growth becomes gentle after four hypotheses are leveraged.
Experiment ::: Intent Classification
Since another downstream task, intent classification, is similar to domain classification, we just show the best model in domain classification, PoolingAvg, on domain-specific intent classification for three popular domains due to space limit. As Table TABREF32 shows, the margins of using multiple hypotheses with PoolingAvg are significant as well.
Conclusions and Future Work
This paper improves the SLU system robustness to ASR errors by integrating $n$-best hypotheses in different ways, e.g. the aggregation of predictions from hypotheses or the concatenation of hypothesis text or embedding. We can achieve significant classification accuracy improvements over production-quality baselines on domain and intent classifications, 14% to 25% relative gains. The improvement is more significant for a subset of testing data where ASR first best is different from transcription. We also observe that with more hypotheses utilized, the performance can be further improved. In the future, we aim to employ additional features (e.g. confidence scores for hypotheses or tokens) to integrate $n$-bests more efficiently, where we can train a function $f$ to obtain a weight for each hypothesis embedding before pooling. Another direction is using deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17, BIBREF18, which can provide a compact representation of multiple hypotheses and more information like times, in the SLU system.
Acknowledgements
We would like to thank Junghoo (John) Cho for proofreading. | perform experiments to utilize ASR $n$-best hypotheses during evaluation |
c9c85eee41556c6993f40e428fa607af4abe80a9 | c9c85eee41556c6993f40e428fa607af4abe80a9_0 | Q: Over which datasets/corpora is this work evaluated?
Text: Introduction
Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable understanding of the semantics of the input speech. When there is an incoming speech signal, the ASR module picks it up and attempts to transcribe it. An ASR model can generate multiple interpretations for most speech inputs, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually passed to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, the NLU modules first classify the utterance into one of many possible domains, and the further analysis on intent classification and slot tagging is domain-specific.

In spite of impressive progress in the current SLU pipeline, the interpretation of speech can still contain errors. Sometimes the top-1 recognition hypothesis of the ASR module is ungrammatical or implausible and far from the ground-truth transcription BIBREF1, BIBREF2. In such cases, we find that an interpretation which exactly matches, or is more similar to, the transcription can often be found among the remaining hypotheses ($2^{nd}- n^{th}$).
To illustrate the value of the $2^{nd}- n^{th}$ hypotheses, we count the frequency of exact matching and more similar (smaller edit distance compared to the 1st hypothesis) to transcription for different positions of the $n$-best hypotheses list. Table TABREF1 exhibits the results. For the explored dataset, we only collect the top 5 interpretations for each utterance ($n = 5$). Notably, when the correct recognition exists among the 5 best hypotheses, 50% of the time (sum of the first row's percentages) it occurs among the $2^{nd}-5^{th}$ positions. Moreover, as shown by the second row in Table TABREF1, compared to the top recognition hypothesis, the other hypotheses can sometimes be more similar to the transcription.
Over the past few years, we have observed the success of reranking the $n$-best hypotheses BIBREF1, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 before feeding the best interpretation to the NLU module. These approaches propose the reranking framework by involving morphological, lexical or syntactic features BIBREF8, BIBREF9, BIBREF10, speech recognition features like confidence score BIBREF1, BIBREF4, and other features like number of tokens, rank position BIBREF1. They are effective to select the best from the hypotheses list and reduce the word error rate (WER) BIBREF11 of speech recognition.
Those reranking models could benefit the first two cases in Table TABREF2 when there is an utterance matching with transcription. However, in other cases like the third row, it is hard to integrate the fragmented information in multiple hypotheses.
This paper proposes various methods integrating $n$-best hypotheses to tackle the problem. To the best of our knowledge, this is the first study that attempts to collectively exploit the $n$-best speech interpretations in the SLU system. This paper serves as the basis of our $n$-best-hypotheses-based SLU system, focusing on the methods of integration for the hypotheses. Since further improvements of the integration framework require considerable setup and descriptions, where jointly optimized tasks (e.g. transcription reconstruction) trained with multiple ways (multitask BIBREF12, multistage learning BIBREF13) and more features (confidence score, rank position, etc.) are involved, we leave those to a subsequent article.
This paper is organized as follows. Section SECREF2 introduces the Baseline, Oracle and Direct models. Section SECREF3 describes proposed ways to integrate $n$-best hypotheses during training. The experimental setup and results are described in Section SECREF4. Section SECREF5 contains conclusions and future work.
Baseline, Oracle and Direct Models ::: Baseline and Oracle
The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.
For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\tilde{t})$, where $p(\tilde{t}) \leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\text{ASR}\ 1^{st}\ \text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription.
Baseline, Oracle and Direct Models ::: Direct Models
Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):
Majority Vote. We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label, i.e. Music.
Sort by Score. After parallel evaluation on all hypotheses, sort the prediction by the corresponding confidence score and choose the one with the highest score, i.e. Video.
Rerank (Oracle). Since the current rerank models (e.g., BIBREF1, BIBREF3, BIBREF4) attempt to select the hypothesis most similar to transcription, we propose the Rerank (Oracle), which picks the hypothesis with the smallest edit distance to transcription (assume it is the $a$-th best) during evaluation and uses its corresponding prediction.
Integration of N-BEST Hypotheses
All the above mentioned models apply the BM trained on one interpretation (transcription). Their abilities to take advantage of multiple interpretations are actually not trained. As a further step, we propose multiple ways to integrate the $n$-best hypotheses during training. The explored methods can be divided into two groups as shown in Fig. FIGREF11. Let $H_1, H_2,..., H_n $ denote all the hypotheses from ASR and $bp_{H_k, i} \in BPs$ denotes the $i$-th pair of bytes (BP) in the $k^{th}$ best hypothesis. The model parameters associated with the two possible ways both contain: embedding $e_{bp}$ for pairs of bytes, BiLSTM parameters $\theta $ and MLP parameters $W, b$.
Integration of N-BEST Hypotheses ::: Hypothesized Text Concatenation
The basic integration method (Combined Sentence) concatenates the $n$-best hypothesized text. We separate hypotheses with a special delimiter ($<$SEP$>$). We assume BPE totally produces $m$ BPs (delimiters are not split during encoding). Suppose the $n^{th}$ hypothesis has $j$ pairs. The entire model can be formulated as:
In Eqn. DISPLAY_FORM13, the connected hypotheses and separators are encoded via BiLSTM to a sequence of hidden state vectors. Each hidden state vector, e.g. $h_1$, is the concatenation of forward $h_{1f}$ and backward $h_{1b}$ states. The concatenation of the last state of the forward and backward LSTM forms the output vector of BiLSTM (concatenation denoted as $[,]$). Then, in Eqn. DISPLAY_FORM14, the MLP module defines the probability of a specific tag (domain or intent) $\tilde{t}$ as the normalized activation ($\sigma $) output after linear transformation of the output vector.
Integration of N-BEST Hypotheses ::: Hypothesis Embedding Concatenation
The concatenation of hypothesized text leverages the $n$-best list by transferring information among hypotheses within an embedding framework, the BiLSTM. However, since every layer has access to both the preceding and subsequent context, the embeddings of the $n$-best hypotheses influence one another, which blurs the representation and makes the whole framework sensitive to noise in the hypotheses.
As the second group of integration approaches, we develop models, PoolingAvg/Max, on the concatenation of hypothesis embedding, which isolate the embedding process among hypotheses and summarize the features by a pooling layer. For each hypothesis (e.g., $i^{th}$ best in Eqn. DISPLAY_FORM16 with $j$ pairs of bytes), we could get a sequence of hidden states from BiLSTM and obtain its final output state by concatenating the first and last hidden state ($h_{output_i}$ in Eqn. DISPLAY_FORM17). Then, we stack all the output states vertically as shown in Eqn. SECREF15. Note that in the real data, we will not always have a fixed size of hypotheses list. For a list with $r$ ($<n$) interpretations, we get the embedding for each of them and pad with the embedding of the first best hypothesis until a fixed size $n$. When $r\ge n$, we only stack the top $n$ embeddings. We employ $h_{output_1}$ for padding to enhance the influence of the top 1 hypothesis, which is more reliable. Finally, one unified representation could be achieved via Pooling (Max/Avg pooling with $n$ by 1 sliding window and stride 1) on the concatenation and one score could be produced per possible tag for the given task.
Experiment ::: Dataset
We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains.
Experiment ::: Performance on Entire Test Set
Table TABREF24 shows the relative error reduction (RErr) of Baseline, Oracle and our proposed models on the entire test set ($\sim $ 300K utterances) for multi-class domain classification. We can see among all the direct methods, predicting based on the hypothesis most similar to the transcription (Rerank (Oracle)) is the best.
As for the other models, which integrate the $n$-bests during training, PoolingAvg obtains the highest relative improvement, 14.29%. It also turns out that all the integration methods outperform the direct models drastically. This shows that having access to $n$-best hypotheses during training is crucial for the quality of the predicted semantics.
Experiment ::: Performance Comparison among Various Subsets
To further investigate the source of the improvements, we split the test set into two parts based on whether the ASR first best agrees with the transcription, and evaluate them separately. Comparing Table TABREF26 and Table TABREF27, the benefits of using multiple hypotheses are clearly gained mainly when the ASR 1st best disagrees with the transcription. When the ASR 1st best agrees with the transcription, the proposed integration models also maintain the performance. Under that condition, we can still improve a little (3.56%) because, by introducing multiple ASR hypotheses, we have more information: even when the transcription/ASR 1st best does not appear among the training set's transcriptions, its $n$-best list may contain hypotheses similar to those in the training set's $n$-bests, so our integration model trained on $n$-best hypotheses still has clues for prediction. This series of comparisons reveals that our approaches integrating the hypotheses are robust to ASR errors, and whenever the ASR model makes mistakes, we outperform the baseline more significantly.
Experiment ::: Improvements on Different Domains and Different Numbers of Hypotheses
Among all the 23 domains, we choose 8 popular domains for further comparison between the Baseline and the best model of Table TABREF24, PoolingAvg. Fig. FIGREF29 exhibits the results. We find that PoolingAvg consistently improves the accuracy for all 8 domains.

In the previous experiments, the number of hypotheses utilized for each utterance during evaluation is five, which means we use the top 5 interpretations when the size of the ASR recognition list is not smaller than 5, and all the interpretations otherwise. Varying the number of hypotheses at evaluation time, Fig. FIGREF30 shows a monotonic increase with access to more hypotheses for PoolingAvg and PoolingMax (Sort by Score is shown because it is the best achievable direct model, while Rerank (Oracle) is not realistic). The growth becomes gentle after four hypotheses are leveraged.
Experiment ::: Intent Classification
Since another downstream task, intent classification, is similar to domain classification, we just show the best model in domain classification, PoolingAvg, on domain-specific intent classification for three popular domains due to space limit. As Table TABREF32 shows, the margins of using multiple hypotheses with PoolingAvg are significant as well.
Conclusions and Future Work
This paper improves the SLU system robustness to ASR errors by integrating $n$-best hypotheses in different ways, e.g. the aggregation of predictions from hypotheses or the concatenation of hypothesis text or embedding. We can achieve significant classification accuracy improvements over production-quality baselines on domain and intent classifications, 14% to 25% relative gains. The improvement is more significant for a subset of testing data where ASR first best is different from transcription. We also observe that with more hypotheses utilized, the performance can be further improved. In the future, we aim to employ additional features (e.g. confidence scores for hypotheses or tokens) to integrate $n$-bests more efficiently, where we can train a function $f$ to obtain a weight for each hypothesis embedding before pooling. Another direction is using deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17, BIBREF18, which can provide a compact representation of multiple hypotheses and more information like times, in the SLU system.
Acknowledgements
We would like to thank Junghoo (John) Cho for proofreading. | $\sim $ 8.7M annotated anonymised user utterances |
c9c85eee41556c6993f40e428fa607af4abe80a9 | c9c85eee41556c6993f40e428fa607af4abe80a9_1 | Q: Over which datasets/corpora is this work evaluated?
Text: Introduction
Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable understanding of the semantics of the input speech. When there is an incoming speech signal, the ASR module picks it up and attempts to transcribe it. An ASR model can generate multiple interpretations for most speech inputs, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually passed to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, the NLU modules first classify the utterance into one of many possible domains, and the further analysis on intent classification and slot tagging is domain-specific.

In spite of impressive progress in the current SLU pipeline, the interpretation of speech can still contain errors. Sometimes the top-1 recognition hypothesis of the ASR module is ungrammatical or implausible and far from the ground-truth transcription BIBREF1, BIBREF2. In such cases, we find that an interpretation which exactly matches, or is more similar to, the transcription can often be found among the remaining hypotheses ($2^{nd}- n^{th}$).
To illustrate the value of the $2^{nd}- n^{th}$ hypotheses, we count the frequency of exact matching and more similar (smaller edit distance compared to the 1st hypothesis) to transcription for different positions of the $n$-best hypotheses list. Table TABREF1 exhibits the results. For the explored dataset, we only collect the top 5 interpretations for each utterance ($n = 5$). Notably, when the correct recognition exists among the 5 best hypotheses, 50% of the time (sum of the first row's percentages) it occurs among the $2^{nd}-5^{th}$ positions. Moreover, as shown by the second row in Table TABREF1, compared to the top recognition hypothesis, the other hypotheses can sometimes be more similar to the transcription.
Over the past few years, we have observed the success of reranking the $n$-best hypotheses BIBREF1, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 before feeding the best interpretation to the NLU module. These approaches propose the reranking framework by involving morphological, lexical or syntactic features BIBREF8, BIBREF9, BIBREF10, speech recognition features like confidence score BIBREF1, BIBREF4, and other features like number of tokens, rank position BIBREF1. They are effective to select the best from the hypotheses list and reduce the word error rate (WER) BIBREF11 of speech recognition.
Those reranking models could benefit the first two cases in Table TABREF2 when there is an utterance matching with transcription. However, in other cases like the third row, it is hard to integrate the fragmented information in multiple hypotheses.
This paper proposes various methods integrating $n$-best hypotheses to tackle the problem. To the best of our knowledge, this is the first study that attempts to collectively exploit the $n$-best speech interpretations in the SLU system. This paper serves as the basis of our $n$-best-hypotheses-based SLU system, focusing on the methods of integration for the hypotheses. Since further improvements of the integration framework require considerable setup and descriptions, where jointly optimized tasks (e.g. transcription reconstruction) trained with multiple ways (multitask BIBREF12, multistage learning BIBREF13) and more features (confidence score, rank position, etc.) are involved, we leave those to a subsequent article.
This paper is organized as follows. Section SECREF2 introduces the Baseline, Oracle and Direct models. Section SECREF3 describes proposed ways to integrate $n$-best hypotheses during training. The experimental setup and results are described in Section SECREF4. Section SECREF5 contains conclusions and future work.
Baseline, Oracle and Direct Models ::: Baseline and Oracle
The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.
For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\tilde{t})$, where $p(\tilde{t}) \leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\text{ASR}\ 1^{st}\ \text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription.
Baseline, Oracle and Direct Models ::: Direct Models
Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):
Majority Vote. We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label, i.e. Music.
Sort by Score. After parallel evaluation on all hypotheses, sort the prediction by the corresponding confidence score and choose the one with the highest score, i.e. Video.
Rerank (Oracle). Since the current rerank models (e.g., BIBREF1, BIBREF3, BIBREF4) attempt to select the hypothesis most similar to the transcription, we propose the Rerank (Oracle), which picks the hypothesis with the smallest edit distance to the transcription (assume it is the $a$-th best) during evaluation and uses its corresponding prediction.
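A minimal sketch of the first two direct strategies, assuming the BM returns one probability vector per hypothesis and interpreting a prediction's confidence score as its maximum class probability (an assumption; the exact score used is not specified here):

```python
import numpy as np

def majority_vote(probs):
    """probs: (n_hyps, n_tags) array of BM outputs, one row per hypothesis."""
    labels = probs.argmax(axis=1)                             # independent prediction per hypothesis
    return np.bincount(labels, minlength=probs.shape[1]).argmax()

def sort_by_score(probs):
    """Keep the prediction of the hypothesis whose top class is most confident."""
    best_hyp = probs.max(axis=1).argmax()
    return probs[best_hyp].argmax()

# toy example with 3 hypotheses and 4 candidate tags
probs = np.array([[0.10, 0.70, 0.10, 0.10],
                  [0.20, 0.50, 0.20, 0.10],
                  [0.05, 0.05, 0.80, 0.10]])
print(majority_vote(probs), sort_by_score(probs))             # -> 1 2
```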
Integration of N-BEST Hypotheses
All the above-mentioned models apply a BM trained on a single interpretation (the transcription); they are never trained to take advantage of multiple interpretations. As a further step, we propose multiple ways to integrate the $n$-best hypotheses during training. The explored methods can be divided into two groups, as shown in Fig. FIGREF11. Let $H_1, H_2,..., H_n$ denote all the hypotheses from the ASR, and let $bp_{H_k, i} \in BPs$ denote the $i$-th pair of bytes (BP) in the $k^{th}$ best hypothesis. The model parameters shared by the two possible ways are: the embedding $e_{bp}$ for pairs of bytes, the BiLSTM parameters $\theta $ and the MLP parameters $W, b$.
Integration of N-BEST Hypotheses ::: Hypothesized Text Concatenation
The basic integration method (Combined Sentence) concatenates the $n$-best hypothesized texts. We separate hypotheses with a special delimiter ($<$SEP$>$). We assume BPE produces $m$ BPs in total (delimiters are not split during encoding). Suppose the $n^{th}$ hypothesis has $j$ pairs. The entire model can be formulated as:
In Eqn. DISPLAY_FORM13, the connected hypotheses and separators are encoded via the BiLSTM into a sequence of hidden state vectors. Each hidden state vector, e.g. $h_1$, is the concatenation of the forward $h_{1f}$ and backward $h_{1b}$ states. The concatenation of the last states of the forward and backward LSTMs forms the output vector of the BiLSTM (concatenation denoted as $[,]$). Then, in Eqn. DISPLAY_FORM14, the MLP module defines the probability of a specific tag (domain or intent) $\tilde{t}$ as the normalized activation ($\sigma $) output after a linear transformation of the output vector.
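As a rough illustration of how the Combined Sentence input is constructed (the hypothesis texts below are made up):

```python
def combine_hypotheses(hypotheses, sep="<SEP>"):
    """Concatenate the n-best hypothesized texts with a special delimiter."""
    return f" {sep} ".join(hypotheses)

n_best = ["play muse", "play news", "play mews"]          # hypothetical ASR n-best list
combined = combine_hypotheses(n_best)
print(combined)                                           # play muse <SEP> play news <SEP> play mews
# The combined string is then BPE-encoded (keeping <SEP> as a single, unsplit token)
# and fed through the same Embedding -> BiLSTM -> MLP stack as the baseline mapping.
```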
Integration of N-BEST Hypotheses ::: Hypothesis Embedding Concatenation
The concatenation of hypothesized text leverages the $n$-best list by transferring information among hypotheses within one embedding framework, the BiLSTM. However, since all the layers have access to both the preceding and subsequent information, the embeddings of the $n$-best hypotheses influence each other, which confuses the embedding and makes the whole framework sensitive to the noise in the hypotheses.
As the second group of integration approaches, we develop models, PoolingAvg/Max, based on the concatenation of hypothesis embeddings, which isolate the embedding process for each hypothesis and summarize the features with a pooling layer. For each hypothesis (e.g., the $i^{th}$ best in Eqn. DISPLAY_FORM16 with $j$ pairs of bytes), we obtain a sequence of hidden states from the BiLSTM and form its final output state by concatenating the first and last hidden states ($h_{output_i}$ in Eqn. DISPLAY_FORM17). Then, we stack all the output states vertically, as shown in Eqn. SECREF15. Note that in the real data we will not always have a fixed-size hypotheses list. For a list with $r$ ($<n$) interpretations, we get the embedding for each of them and pad with the embedding of the first best hypothesis until the fixed size $n$ is reached. When $r\ge n$, we only stack the top $n$ embeddings. We employ $h_{output_1}$ for padding to enhance the influence of the top 1 hypothesis, which is more reliable. Finally, one unified representation is obtained via pooling (max/average pooling with an $n$ by 1 sliding window and stride 1) over the concatenation, and one score is produced per possible tag for the given task.
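The sketch below captures the PoolingAvg/Max idea in tf.keras: a shared per-hypothesis encoder, stacking of the $n$ output states, and pooling over the hypothesis axis. Layer widths are assumptions, the Bidirectional layer's default final-state handling stands in for the exact first/last-state concatenation described above, and padding with copies of the 1-best hypothesis is assumed to happen during data preparation.

```python
import tensorflow as tf

N_HYPS, MAX_LEN = 5, 50                                    # fixed hypothesis-list size; MAX_LEN is assumed
NUM_BP, EMB_DIM, HIDDEN, NUM_TAGS = 10000, 100, 128, 23    # widths are assumed; 23 = number of domains

# shared encoder: BPE ids of one hypothesis -> one output state h_output_i
hyp_in = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
h = tf.keras.layers.Embedding(NUM_BP, EMB_DIM)(hyp_in)
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(HIDDEN))(h)
encoder = tf.keras.Model(hyp_in, h)

# apply the same encoder to every hypothesis, stack, then pool over the hypothesis axis
list_in = tf.keras.Input(shape=(N_HYPS, MAX_LEN), dtype="int32")
stacked = tf.keras.layers.TimeDistributed(encoder)(list_in)     # (batch, N_HYPS, 2*HIDDEN)
pooled = tf.keras.layers.GlobalAveragePooling1D()(stacked)      # PoolingAvg; GlobalMaxPooling1D gives PoolingMax
scores = tf.keras.layers.Dense(NUM_TAGS, activation="softmax")(pooled)

pooling_avg = tf.keras.Model(list_in, scores)
# Hypothesis lists with r < N_HYPS entries are padded with copies of the 1-best hypothesis
# before encoding; lists with r >= N_HYPS keep only the top N_HYPS entries.
```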
Experiment ::: Dataset
We conduct our experiments on $\sim $ 8.7M annotated anonymised user utterances, derived from requests across 23 domains.
Experiment ::: Performance on Entire Test Set
Table TABREF24 shows the relative error reduction (RErr) of the Baseline, the Oracle and our proposed models on the entire test set ($\sim $ 300K utterances) for multi-class domain classification. We can see that, among all the direct methods, predicting based on the hypothesis most similar to the transcription (Rerank (Oracle)) is the best.
As for the other models, which attempt to integrate the $n$-best hypotheses during training, PoolingAvg obtains the highest relative improvement, 14.29%. It also turns out that all the integration methods drastically outperform the direct models. This shows that having access to the $n$-best hypotheses during training is crucial for the quality of the predicted semantics.
Experiment ::: Performance Comparison among Various Subsets
To further investigate the reason for the improvements, we split the test set into two parts based on whether the ASR first best agrees with the transcription, and evaluate them separately. Comparing Table TABREF26 and Table TABREF27, the benefits of using multiple hypotheses are clearly gained mainly when the ASR 1st best disagrees with the transcription. When the ASR 1st best agrees with the transcription, the proposed integration models maintain the performance and can still improve it a little (3.56%): by introducing multiple ASR hypotheses we have more information, and when the transcription/ASR 1st best does not appear among the training set's transcriptions, its $n$-best list may contain hypotheses similar to those in the training set's $n$-best lists, giving our integration model trained on $n$-best hypotheses a clue for prediction. This series of comparisons reveals that our approaches integrating the hypotheses are robust to ASR errors, and that whenever the ASR model makes mistakes, we outperform the baseline more significantly.
Experiment ::: Improvements on Different Domains and Different Numbers of Hypotheses
Among all the 23 domains, we choose 8 popular domains for further comparison between the Baseline and the best model of Table TABREF24, PoolingAvg. Fig. FIGREF29 exhibits the results. We find that PoolingAvg consistently improves the accuracy for all 8 domains.
In the previous experiments, the number of hypotheses utilized for each utterance during evaluation is five, which means we use the top 5 interpretations when the size of the ASR recognition list is not smaller than 5 and use all the interpretations otherwise. Varying the number of hypotheses used at evaluation time, Fig. FIGREF30 shows a monotonic increase with access to more hypotheses for PoolingAvg and PoolingMax (Sort by Score is shown because it is the best achievable direct model, while Rerank (Oracle) is not realistic). The growth becomes gentle after four hypotheses are leveraged.
Experiment ::: Intent Classification
Since another downstream task, intent classification, is similar to domain classification, due to space limits we only show the best model from domain classification, PoolingAvg, on domain-specific intent classification for three popular domains. As Table TABREF32 shows, the margins of using multiple hypotheses with PoolingAvg are significant as well.
Conclusions and Future Work
This paper improves the SLU system's robustness to ASR errors by integrating $n$-best hypotheses in different ways, e.g. the aggregation of predictions from hypotheses or the concatenation of the hypothesis texts or embeddings. We achieve significant classification accuracy improvements over production-quality baselines on domain and intent classification, with 14% to 25% relative gains. The improvement is more significant for the subset of testing data where the ASR first best is different from the transcription. We also observe that with more hypotheses utilized, the performance can be further improved. In the future, we aim to employ additional features (e.g. confidence scores for hypotheses or tokens) to integrate the $n$-bests more efficiently, where we can train a function $f$ to obtain a weight for each hypothesis embedding before pooling. Another direction is using a deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17, BIBREF18, which can provide a compact representation of multiple hypotheses and additional information such as timing, in the SLU system.
Acknowledgements
We would like to thank Junghoo (John) Cho for proofreading. | on $\sim $ 8.7M annotated anonymised user utterances |
982979cb3c71770d8d7d2d1be8f92b66223dec85 | 982979cb3c71770d8d7d2d1be8f92b66223dec85_0 | Q: What new metrics are suggested to track progress?
Text: Introduction
Word embeddings have great practical importance since they can be used as pre-computed high-density features to ML models, significantly reducing the amount of training data required in a variety of NLP tasks. However, there are several inter-related challenges with computing and consistently distributing word embeddings concerning the:
Not only is the space of possibilities for each of these aspects large, there are also challenges in performing a consistent large-scale evaluation of the resulting embeddings BIBREF0 . This makes systematic experimentation with alternative word-embedding configurations extremely difficult.
In this work, we make progress in trying to find good combinations of some of the previous parameters. We focus specifically on the task of computing word embeddings for processing the Portuguese Twitter stream. User-generated content (such as twitter messages) tends to be populated by words that are specific to the medium, and that are constantly being added by users. These dynamics pose challenges to NLP systems, which have difficulties in dealing with out-of-vocabulary words. Therefore, learning a semantic representation for those words directly from the user-generated stream - and as the words arise - would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words.
Starting from our own implementation of a neural word embedding model, which should be seen as a flexible baseline model for further experimentation, our research tries to answer the following practical questions:
By answering these questions based on a reasonably small sample of Twitter data (5M), we hope to find the best way to proceed and train embeddings for Twitter vocabulary using the much larger amount of Twitter data available (300M), but for which parameter experimentation would be unfeasible. This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a large-scale database of embeddings for processing Portuguese Twitter data.
Related Work
There are several approaches to generating word embeddings. One can build models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe BIBREF1 , BIBREF2 , or one can extract such embeddings as by-products of more general models, which implicitly compute such word embeddings in the process of solving other language tasks.
Word embeddings methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional bag-of-words models. Moreover, this low dimensional space is able to capture lexical and semantic properties of words. Co-occurrence statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings. One creates a low rank approximation of the word co-occurrence matrix, such as in the case of Latent Semantic Analysis BIBREF3 and GloVe BIBREF2 . The other approach consists in extracting internal representations from neural network models of text BIBREF4 , BIBREF5 , BIBREF1 . Levy and Goldberg BIBREF6 showed that the two approaches are closely related.
Although word embeddings research goes back several decades, it was the recent developments of Deep Learning and the word2vec framework BIBREF1 that captured the attention of the NLP community. Moreover, Mikolov et al. BIBREF7 showed that embeddings trained using word2vec models (CBOW and Skip-gram) exhibit linear structure, allowing analogy questions of the form “man:woman::king:??”, and can boost the performance of several text classification tasks.
One of the issues of recent work in training word embeddings is the variability of experimental setups reported. For instance, in the paper describing GloVe BIBREF2 authors trained their model on five corpora of different sizes and built a vocabulary of 400K most frequent words. Mikolov et al. BIBREF7 trained with 82K vocabulary while Mikolov et al. BIBREF1 was trained with 3M vocabulary. Recently, Arora et al. BIBREF8 proposed a generative model for learning embeddings that tries to explain some theoretical justification for nonlinear models (e.g. word2vec and GloVe) and some hyper parameter choices. Authors evaluated their model using 68K vocabulary.
SemEval 2016-Task 4: Sentiment Analysis in Twitter organizers report that participants either used general purpose pre-trained word embeddings, or trained them from the Tweet 2016 dataset or “from some sort of dataset” BIBREF9 . However, participants neither report the size of the vocabulary used nor the possible effect it might have on the task-specific results.
Recently, Rodrigues et al. BIBREF10 created and distributed the first general purpose embeddings for Portuguese. The word2vec gensim implementation was used, and the authors report results with different values for the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets into Portuguese, which they also made publicly available, and we use some of those in this work.
Our Neural Word Embedding Model
The neural word embedding model we use in our experiments is heavily inspired by the one described in BIBREF4 , but ours is one layer deeper and is set to solve a slightly different word prediction task. Given a sequence of 5 words $w_{i-2}\; w_{i-1}\; w_i\; w_{i+1}\; w_{i+2}$, the task the model tries to perform is that of predicting the middle word, $w_i$, based on the two words on the left - $w_{i-2}\; w_{i-1}$ - and the two words on the right - $w_{i+1}\; w_{i+2}$: $P(w_i \mid w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2})$. This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in “close” regions of the embedding hyper-space.
Our neural model is composed of the following layers:
All neural activations in the model are sigmoid functions. The model was implemented using the Syntagma library, which relies on Keras BIBREF11 for model development, and we train the model using the built-in ADAM BIBREF12 optimizer with the default parameters.
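The exact layer list is not shown above, so the Keras sketch below is only a rough reconstruction consistent with the surrounding description (4 context words in, 64-dimensional embeddings, sigmoid hidden activations, ADAM, cross-entropy); the number and width of the hidden layers are assumptions.

```python
import tensorflow as tf

V = 32768            # vocabulary size |V|; one of the values explored in the experiments
EMB_DIM = 64         # input embedding dimensionality, as stated in the text
HIDDEN = 64          # hidden-layer width: an assumption, not given in the text

ctx = tf.keras.Input(shape=(4,), dtype="int32")              # w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}
x = tf.keras.layers.Embedding(V, EMB_DIM)(ctx)               # input word embeddings (the vectors we keep)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(HIDDEN, activation="sigmoid")(x)   # sigmoid activations, as in the text
x = tf.keras.layers.Dense(HIDDEN, activation="sigmoid")(x)   # "one layer deeper" than the reference model
out = tf.keras.layers.Dense(V, activation="softmax")(x)      # predict the middle word w_i

model = tf.keras.Model(ctx, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```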
Experimental Setup
We are interested in assessing two aspects of the word embedding process. On one hand, we wish to evaluate the semantic quality of the produced embeddings. On the other, we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary $|V|$ we try to embed. These aspects have fundamental practical importance for deciding how we should attempt to produce the large-scale database of embeddings we will provide in the future. All resources developed in this work are publicly available.
Apart from the size of the vocabulary to be processed ($|V|$), the hyperparameters of the model that we could potentially explore are i) the dimensionality of the input word embeddings and ii) the dimensionality of the output word embeddings. As mentioned before, we set both to 64 dimensions after performing some quick manual experimentation. Full hyperparameter exploration is left for future work.
Our experimental testbed comprises a desktop with an NVIDIA TITAN X (Pascal) GPU, an Intel Core i7 3770K quad-core CPU at 3.5 GHz, 32 GB of DDR3 RAM and a 180 GB SSD drive.
Training Data
We randomly sampled 5M tweets from a corpus of 300M tweets collected from the Portuguese Twitter community BIBREF13 . The 5M tweets comprise a total of 61.4M words (approx. 12 words per tweet on average). From those 5M tweets we generated a database containing 18.9M distinct 5-grams, along with their frequency counts. In this process, all text was down-cased. To help anonymize the n-gram information, we substituted all the twitter handles by an artificial token “T_HANDLE". We also substituted all HTTP links by the token “LINK". We prepended two special tokens to complete the 5-grams generated from the first two words of the tweet, and we correspondingly appended two other special tokens to complete the 5-grams centered around the two last tokens of the tweet.
Tokenization was performed by trivially separating tokens by blank space. No linguistic pre-processing, such as separating punctuation from words, was performed. We opted not to do any pre-processing so as not to introduce any linguistic bias from another tool (tokenization of user-generated content is not a trivial problem). The most direct consequence of not performing any linguistic pre-processing is that of increasing the vocabulary size and diluting the token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings for both actual words (e.g. “ronaldo") and the words that have punctuation attached (e.g. “ronaldo!"). In practice, we believe that this can actually be an advantage for the downstream consumers of the embeddings, since they can also relax the requirements of their own tokenization stage. Overall, the dictionary thus produced contains approximately 1.3M distinct entries. Our dictionary was sorted by frequency, so the words with the lowest indices correspond to the most common words in the corpus.
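A small sketch of this normalization, tokenization and 5-gram extraction step; the names of the boundary tokens and the exact handle/link patterns are assumptions:

```python
import re
from collections import Counter

PAD_LEFT, PAD_RIGHT = ["<s1>", "<s2>"], ["</s2>", "</s1>"]   # placeholder names for the special boundary tokens

def normalise(tweet: str) -> str:
    tweet = tweet.lower()                                    # all text is down-cased
    tweet = re.sub(r"@\w+", "T_HANDLE", tweet)               # anonymise twitter handles
    tweet = re.sub(r"https?://\S+", "LINK", tweet)           # replace HTTP links
    return tweet

def five_grams(tweet: str):
    tokens = PAD_LEFT + normalise(tweet).split() + PAD_RIGHT  # trivial whitespace tokenization
    for i in range(len(tokens) - 4):
        yield tuple(tokens[i:i + 5])

counts = Counter()
for tweet in ["Hoje o @slbenfica joga em Lisboa! https://t.co/xyz"]:   # made-up example tweet
    counts.update(five_grams(tweet))
print(len(counts), counts.most_common(2))
```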
We used the information from the 5-gram database to generate all training data used in the experiments. For a fixed size $|V|$ of the target vocabulary to be embedded (e.g. $|V|$ = 2048), we scanned the database to obtain all possible 5-grams for which all tokens were among the top $|V|$ words of the dictionary (i.e. the top $|V|$ most frequent words in the corpus). Depending on $|V|$, different numbers of valid training 5-grams were found in the database: the larger $|V|$, the more valid 5-grams would pass the filter. The number of examples collected for each of the values of $|V|$ is shown in Table TABREF16 .
Since one of the goals of our experiments is to understand the impact of using different amounts of training data, for each size of vocabulary to be embedded $|V|$ we will run experiments training the models using 25%, 50%, 75% and 100% of the data available.
Metrics related to the Learning Process
We tracked metrics related to the learning process itself, as a function of the vocabulary size to be embedded $|V|$ and of the fraction of training data used (25%, 50%, 75% and 100%). For all possible configurations, we recorded the values of the training and validation loss (cross entropy) after each epoch. Tracking these metrics serves as a minimalistic sanity check: if the model is not able to solve the word prediction task with some degree of success (e.g. if we observe no substantial decay in the losses) then one should not expect the embeddings to capture any of the distributional information they are supposed to capture.
Tests and Gold-Standard Data for Intrinsic Evaluation
Using the gold standard data (described below), we performed three types of tests:
Class Membership Tests: embeddings corresponding to two members of the same semantic class (e.g. “Months of the Year", “Portuguese Cities", “Smileys") should be close, since they are supposed to be found in mostly the same contexts.
Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes are expected to be found in significantly different contexts.
Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque" abbreviated by “pq") and partial references (e.g. “slb" and “benfica") should be almost equal, since both alternatives are supposed to be interchangeable in all contexts (either maintaining or inverting the meaning).
Therefore, in our tests, two words are considered:
distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).
to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).
equivalent if the cosine of the embeddings is higher than 0.85 (or 0.95).
We report results using different thresholds of cosine similarity as we noticed that cosine similarity is skewed to higher values in the embedding space, as observed in related work BIBREF14 , BIBREF15 . We used the following sources of data for testing Class Membership:
AP+Battig data. This data was collected from the evaluation data provided by BIBREF10 . These correspond to 29 semantic classes.
Twitter-Class - collected manually by the authors by checking the most frequent words in the dictionary and then expanding the classes. These include the following 6 sets (number of elements in brackets): smileys (13), months (12), countries (6), names (19), surnames (14), Portuguese cities (9).
For the Class Distinction test, we pair each element of each of the gold standard classes with all the other elements from other classes (removing duplicate pairs, since ordering does not matter), and we generate pairs of words which are supposed to belong to different classes. For the Word Equivalence test, we manually collected equivalent pairs, focusing on abbreviations that are popular on Twitter (e.g. “qt" $\leftrightarrow $ “quanto" or “lx" $\leftrightarrow $ “lisboa") and on frequent acronyms (e.g. “slb" $\leftrightarrow $ “benfica"). In total, we compiled 48 equivalence pairs.
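A small sketch of how the test pairs can be generated from the gold-standard classes; the class contents below are a toy subset used only for illustration:

```python
from itertools import combinations

classes = {
    "months":  ["janeiro", "fevereiro", "marco"],
    "smileys": [":)", ":(", ":d"],
    "cities":  ["lisboa", "porto", "braga"],
}   # toy subset; the real gold standard has 29 AP+Battig classes plus the 6 Twitter-Class sets

# Class Membership: pairs of words from the same class
same_class_pairs = [(a, b) for members in classes.values()
                    for a, b in combinations(members, 2)]

# Class Distinction: cross product of elements of different classes, without duplicate pairs
diff_class_pairs = [(a, b) for c1, c2 in combinations(classes, 2)
                    for a in classes[c1] for b in classes[c2]]

equivalence_pairs = [("pq", "porque"), ("lx", "lisboa"), ("slb", "benfica")]   # 3 of the 48 pairs
```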
For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classify as i) being distinct (cosine $<$ 0.7 or 0.8), ii) belonging to the same class (cosine $\ge $ 0.7 or 0.8), and iii) being equivalent (cosine $\ge $ 0.85 or 0.95).
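A minimal sketch of the coverage and success computations, using random vectors only so that it runs; the thresholds follow the criteria above:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_pairs(pairs, emb, predicate):
    """Coverage: fraction of gold pairs with both words embedded.
    Success: fraction of covered pairs for which predicate(cosine) holds."""
    covered = [(a, b) for a, b in pairs if a in emb and b in emb]
    if not covered:
        return 0.0, 0.0
    hits = sum(predicate(cosine(emb[a], emb[b])) for a, b in covered)
    return len(covered) / len(pairs), hits / len(covered)

emb = {w: np.random.randn(64) for w in ["janeiro", "fevereiro", "lisboa", "pq", "porque"]}  # toy embeddings

membership = evaluate_pairs([("janeiro", "fevereiro")], emb, lambda c: c >= 0.70)
distinction = evaluate_pairs([("janeiro", "lisboa")], emb, lambda c: c < 0.70)
equivalence = evaluate_pairs([("pq", "porque")], emb, lambda c: c >= 0.85)
print(membership, distinction, equivalence)
```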
It is worth making a final comment about the gold standard data. Although we do not expect this gold standard data to be sufficient for a wide-spectrum evaluation of the resulting embeddings, it should be enough for providing us clues regarding areas where the embedding process is capturing enough semantics, and where it is not. These should still provide valuable indications for planning how to produce the much larger database of word embeddings.
Results and Analysis
We ran the training process and performed the corresponding evaluation for 12 combinations of the size of the vocabulary to be embedded and the volume of training data used. Table TABREF27 presents some overall statistics after training for 40 epochs.
The average time per epoch increases first with the size of the vocabulary to embed, $|V|$ (because the model will have more parameters), and then, for each $|V|$, with the volume of training data. Using our testbed (Section SECREF4 ), the total learning time in our experiments varied from a minimum of 160 seconds, with $|V|$ = 2048 and 25% of the data, to a maximum of 22.5 hours, with $|V|$ = 32768 and using 100% of the training data available (extracted from 5M tweets). These numbers give us an approximate figure of how time-consuming it would be to train embeddings from the complete Twitter corpus we have, consisting of 300M tweets.
We now analyze the learning process itself. We plot the training set loss and validation set loss for the different values of $|V|$ (Figure FIGREF28 left) with 40 epochs and using all the available data. As expected, the loss is reduced after each epoch, with the validation loss, although slightly higher, following the same trend. When using 100% of the data we see no model overfitting. We can also observe that the higher $|V|$ is, the higher the absolute values of the losses are. This is not surprising because as the number of words to predict becomes higher the problem will tend to become harder. Also, because we keep the dimensionality of the embedding space constant (64 dimensions), it becomes increasingly hard to represent and differentiate larger vocabularies in the same hyper-volume. We believe this is an especially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute.
On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed $|V|$ = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the available data the model tends to overfit, as indicated by the consistent increase in the validation loss after about 15 epochs (check the dashed lines on the right side of Figure FIGREF28 ). This suggests that in the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough not to show that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model.
Intrinsic Evaluation
Table TABREF30 presents results for the three different tests described in Section SECREF4 . The first (expected) result is that the coverage metrics increase with the size of the vocabulary being embedded, i.e., with $|V|$. Because the Word Equivalence test set was specifically created for evaluating Twitter-based embeddings, when embedding $|V|$ = 32768 words we achieve almost 90% test coverage. On the other hand, for the Class Distinction test set - which was created by doing the cross product of the test cases of each class in the Class Membership test set - we obtain very low coverage figures. This indicates that it is not always possible to re-use previously compiled gold-standard data, and that it will be important to compile gold-standard data directly from Twitter content if we want to perform a more precise evaluation.
The effect of varying the cosine similarity decision threshold from 0.70 to 0.80 for the Class Membership test shows that the percentage of test cases classified as correct drops significantly. However, the drop is more accentuated when training with only a portion of the available data. The difference between the two alternative threshold values is even larger in the Word Equivalence test.
The Word Equivalence test, in which we consider two words equivalent if the cosine of the embedding vectors is higher than 0.95, proved to be an extremely demanding test. Nevertheless, for $|V|$ = 32768 the results are far superior, and for a much larger coverage, than for lower $|V|$. The same happens with the Class Membership test.
On the other hand, the Class Distinction test shows a different trend for the larger value of $|V|$ = 32768, but the coverage for other values of $|V|$ is so low that it becomes difficult to hypothesize about the reduced True Negatives (TN) percentages obtained for the largest $|V|$. It would be necessary to confirm this behavior with even larger values of $|V|$. One might hypothesize that the ability to distinguish between classes requires larger thresholds when $|V|$ is large. Also, we can speculate about the need to increase the number of dimensions to be able to encapsulate different semantic information for so many words.
Further Analysis regarding Evaluation Metrics
Despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available, these results also revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds. This sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding hyper-parameters, which was also observed in other recent works BIBREF15 .
By using these absolute thresholds as criteria for deciding similarity of words, we create a dependency between the evaluation metrics and the geometry of the embedded data. If we see the embedding data as a graph, this means that metrics will change if we apply scaling operations to certain parts of the graph, even if its structure (i.e. relative position of the embedded words) does not change.
For most practical purposes (including training downstream ML models) absolute distances have little meaning. What is fundamental is that the resulting embeddings are able to capture topological information: similar words should be closer to each other than they are to words that are dissimilar to them (under the various criteria of similarity we care about), independently of the absolute distances involved.
It is now clear that a key aspect of future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on the weaknesses and strengths of the embeddings and not so much on generic scores. For example, one metric could consist of checking whether, for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metric.
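A possible form of such a rank-based check is sketched below; this is one way to implement the idea, not the metric used in this paper, and the embeddings are random vectors only so that the snippet runs:

```python
import numpy as np

def cos(emb, a, b):
    return np.dot(emb[a], emb[b]) / (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b]))

def class_consistent(word, same_class, other_words, emb):
    """True if every same-class word is closer to `word` than any out-of-class word,
    regardless of the absolute cosine values."""
    closest_other = max(cos(emb, word, w) for w in other_words)
    return all(cos(emb, word, w) > closest_other for w in same_class)

emb = {w: np.random.randn(64) for w in ["janeiro", "fevereiro", "marco", "lisboa", "porto"]}  # toy data
print(class_consistent("janeiro", ["fevereiro", "marco"], ["lisboa", "porto"], emb))
```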
Conclusions
Producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium. We implemented a neural word embedding model that embeds words based on n-gram information extracted from a sample of the Portuguese Twitter stream, and which can be seen as a flexible baseline for further experiments in the field. The work reported in this paper is a preliminary study on finding suitable parameters for training word embeddings from Twitter, along with adequate evaluation tests and gold-standard data.
Results show that using less than 50% of the available training examples for each vocabulary size might result in overfitting. The resulting embeddings obtain an interesting performance on intrinsic evaluation tests when trained on a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. Nevertheless, results exhibit a skewness in the cosine similarity scores that should be further explored in future work. More specifically, the Class Distinction test set proved to be challenging and opens the door to evaluating not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values.
Therefore, a key area of future exploration has to do with better evaluation resources and metrics. We made some initial effort on this front. However, we believe that developing new intrinsic tests, agnostic to absolute values of metrics and concerned with topological aspects of the embedding space, and expanding gold-standard data with cases tailored for user-generated content, is of fundamental importance for the progress of this line of work.
Furthermore, we plan to make publicly available word embeddings trained from a large sample of 300M tweets collected from the Portuguese Twitter stream. This will require experimenting with producing embeddings with higher dimensionality (to avoid the cosine skewness effect) and training with even larger vocabularies. Also, there is room for experimenting with some of the hyper-parameters of the model itself (e.g. activation functions, dimensions of the layers), which we know have an impact on the final results. | For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine |
5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212 | 5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212_0 | Q: What intrinsic evaluation metrics are used?
| Class Membership Tests, Class Distinction Test, Word Equivalence Test |
5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212 | 5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212_1 | Q: What intrinsic evaluation metrics are used?
Text: Introduction
Word embeddings have great practical importance since they can be used as pre-computed high-density features to ML models, significantly reducing the amount of training data required in a variety of NLP tasks. However, there are several inter-related challenges with computing and consistently distributing word embeddings concerning the:
Not only is the space of possibilities for each of these aspects large, there are also challenges in performing a consistent large-scale evaluation of the resulting embeddings BIBREF0 . This makes systematic experimentation with alternative word-embedding configurations extremely difficult.
In this work, we make progress in trying to find good combinations of some of the previous parameters. We focus specifically on the task of computing word embeddings for processing the Portuguese Twitter stream. User-generated content (such as Twitter messages) tends to be populated by words that are specific to the medium, and that are constantly being added by users. These dynamics pose challenges to NLP systems, which have difficulties in dealing with out-of-vocabulary words. Therefore, learning a semantic representation for those words directly from the user-generated stream - and as the words arise - would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words.
Starting from our own implementation of a neural word embedding model, which should be seen as a flexible baseline model for further experimentation, our research tries to answer the following practical questions:
By answering these questions based on a reasonably small sample of Twitter data (5M), we hope to find the best way to proceed and train embeddings for Twitter vocabulary using the much larger amount of Twitter data available (300M), but for which parameter experimentation would be unfeasible. This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a large-scale database of embeddings for processing Portuguese Twitter data.
Related Work
There are several approaches to generating word embeddings. One can build models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe BIBREF1 , BIBREF2 , or one can extract such embeddings as by-products of more general models, which implicitly compute such word embeddings in the process of solving other language tasks.
Word embeddings methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional bag-of-words models. Moreover, this low dimensional space is able to capture lexical and semantic properties of words. Co-occurrence statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings. One creates a low rank approximation of the word co-occurrence matrix, such as in the case of Latent Semantic Analysis BIBREF3 and GloVe BIBREF2 . The other approach consists in extracting internal representations from neural network models of text BIBREF4 , BIBREF5 , BIBREF1 . Levy and Goldberg BIBREF6 showed that the two approaches are closely related.
Although word embedding research goes back several decades, it was the recent developments of Deep Learning and the word2vec framework BIBREF1 that captured the attention of the NLP community. Moreover, Mikolov et al. BIBREF7 showed that embeddings trained using word2vec models (CBOW and Skip-gram) exhibit linear structure, allowing analogy questions of the form “man:woman::king:??.” and can boost the performance of several text classification tasks.
One of the issues of recent work in training word embeddings is the variability of the experimental setups reported. For instance, in the paper describing GloVe BIBREF2 the authors trained their model on five corpora of different sizes and built a vocabulary of the 400K most frequent words. Mikolov et al. BIBREF7 trained with an 82K vocabulary while Mikolov et al. BIBREF1 trained with a 3M vocabulary. Recently, Arora et al. BIBREF8 proposed a generative model for learning embeddings that tries to provide some theoretical justification for nonlinear models (e.g. word2vec and GloVe) and for some hyper-parameter choices. The authors evaluated their model using a 68K vocabulary.
The organizers of SemEval 2016-Task 4: Sentiment Analysis in Twitter report that participants either used general-purpose pre-trained word embeddings, or trained from the Tweet 2016 dataset or “from some sort of dataset” BIBREF9 . However, participants report neither the size of the vocabulary used nor the possible effect it might have on the task-specific results.
Recently, Rodrigues et al. BIBREF10 created and distributed the first general-purpose embeddings for Portuguese. The gensim implementation of word2vec was used and the authors report results with different values for the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets to Portuguese, which they also made publicly available; we use some of those in this work.
Our Neural Word Embedding Model
The neural word embedding model we use in our experiments is heavily inspired in the one described in BIBREF4 , but ours is one layer deeper and is set to solve a slightly different word prediction task. Given a sequence of 5 words - INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 , the task the model tries to perform is that of predicting the middle word, INLINEFORM5 , based on the two words on the left - INLINEFORM6 INLINEFORM7 - and the two words on the right - INLINEFORM8 INLINEFORM9 : INLINEFORM10 . This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in “close” regions of the embedding hyper-space.
Our neural model is composed of the following layers:
All neural activations in the model are sigmoid functions. The model was implemented using the Syntagma library which relies on Keras BIBREF11 for model development, and we train the model using the built-in ADAM BIBREF12 optimizer with the default parameters.
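As an illustration, a minimal Keras sketch in the spirit of this description is given below; the hidden-layer sizes and the softmax output layer are illustrative assumptions, not the exact configuration used.

```python
# Minimal sketch (not the exact architecture): predict the middle word of a
# 5-gram from the two words on each side of it.
from tensorflow import keras
from tensorflow.keras import layers

V = 32768        # size of the vocabulary to embed (one of the values explored)
EMB_DIM = 64     # embedding dimensionality used in the experiments

context = keras.Input(shape=(4,), dtype="int32")      # w1, w2, w4, w5
emb = layers.Embedding(V, EMB_DIM)(context)           # (batch, 4, 64)
flat = layers.Flatten()(emb)                          # (batch, 256)
h1 = layers.Dense(256, activation="sigmoid")(flat)    # sigmoid activations,
h2 = layers.Dense(64, activation="sigmoid")(h1)       # as stated above
out = layers.Dense(V, activation="softmax")(h2)       # distribution over w3

model = keras.Model(context, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```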
Experimental Setup
We are interested in assessing two aspects of the word embedding process. On one hand, we wish to evaluate the semantic quality of the produced embeddings. On the other, we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary INLINEFORM0 we try to embed. These aspects have fundamental practical importance for deciding how we should attempt to produce the large-scale database of embeddings we will provide in the future. All resources developed in this work are publicly available.
Apart from the size of the vocabulary to be processed ( INLINEFORM0 ), the hyperparameters of the model that we could potentially explore are i) the dimensionality of the input word embeddings and ii) the dimensionality of the output word embeddings. As mentioned before, we set both to 64 dimensions after performing some quick manual experimentation. Full hyperparameter exploration is left for future work.
Our experimental testbed comprises a desktop with a nvidia TITAN X (Pascal), Intel Core Quad i7 3770K 3.5Ghz, 32 GB DDR3 RAM and a 180GB SSD drive.
Training Data
We randomly sampled 5M tweets from a corpus of 300M tweets collected from the Portuguese Twitter community BIBREF13 . The 5M comprise a total of 61.4M words (approx. 12 words per tweet on average). From those 5M tweets we generated a database containing 18.9M distinct 5-grams, along with their frequency counts. In this process, all text was down-cased. To help anonymize the n-gram information, we substituted all the Twitter handles with an artificial token “T_HANDLE". We also substituted all HTTP links with the token “LINK". We prepended two special tokens to complete the 5-grams generated from the first two words of the tweet, and we correspondingly appended two other special tokens to complete 5-grams centered around the two last tokens of the tweet.
Tokenization was performed by trivially separating tokens by blank space. No linguistic pre-processing, such as separating punctuation from words, was performed. We opted not to do any pre-processing so as not to introduce any linguistic bias from another tool (tokenization of user-generated content is not a trivial problem). The most direct consequence of not performing any linguistic pre-processing is that of increasing the vocabulary size and diluting token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings for both actual words (e.g. “ronaldo") and the words that have punctuation attached (e.g. “ronaldo!"). In practice, we believe that this can actually be an advantage for the downstream consumers of the embeddings, since they can also relax the requirements of their own tokenization stage. Overall, the dictionary thus produced contains approximately 1.3M distinct entries. Our dictionary was sorted by frequency, so the words with the lowest index correspond to the most common words in the corpus.
We used the information from the 5-gram database to generate all training data used in the experiments. For a fixed size INLINEFORM0 of the target vocabulary to be embedded (e.g. INLINEFORM1 = 2048), we scanned the database to obtain all possible 5-grams for which all tokens were among the top INLINEFORM2 words of the dictionary (i.e. the top INLINEFORM3 most frequent words in the corpus). Depending on INLINEFORM4 , different numbers of valid training 5-grams were found in the database: the larger INLINEFORM5 the more valid 5-grams would pass the filter. The number of examples collected for each of the values of INLINEFORM6 is shown in Table TABREF16 .
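As an illustration, the filtering step can be sketched as follows, assuming the 5-gram database is available as an iterable of (tokens, count) pairs and the dictionary maps each word to its frequency rank (names are illustrative):

```python
# Keep only 5-grams whose five tokens are all among the top-V most frequent
# words; each surviving 5-gram yields one training example (context -> middle).
def build_examples(five_gram_db, word_rank, V):
    examples = []
    for tokens, count in five_gram_db:          # tokens = (w1, w2, w3, w4, w5)
        if all(word_rank.get(w, V) < V for w in tokens):
            w1, w2, w3, w4, w5 = tokens
            examples.append(((w1, w2, w4, w5), w3, count))
    return examples
```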
Since one of the goals of our experiments is to understand the impact of using different amounts of training data, for each size of vocabulary to be embedded INLINEFORM0 we will run experiments training the models using 25%, 50%, 75% and 100% of the data available.
Metrics related with the Learning Process
We tracked metrics related to the learning process itself, as a function of the vocabulary size to be embedded INLINEFORM0 and of the fraction of training data used (25%, 50%, 75% and 100%). For all possible configurations, we recorded the values of the training and validation loss (cross entropy) after each epoch. Tracking these metrics serves as a minimalistic sanity check: if the model is not able to solve the word prediction task with some degree of success (e.g. if we observe no substantial decay in the losses) then one should not expect the embeddings to capture any of the distributional information they are supposed to capture.
Tests and Gold-Standard Data for Intrinsic Evaluation
Using the gold standard data (described below), we performed three types of tests:
Class Membership Tests: embeddings corresponding to two members of the same semantic class (e.g. “Months of the Year", “Portuguese Cities", “Smileys") should be close, since they are supposed to be found in mostly the same contexts.
Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes are expected to be found in significantly different contexts.
Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque" abbreviated by “pq") and partial references (e.g. “slb and benfica") should be almost equal, since both alternatives are supposed to be interchangeable in all contexts (either maintaining or inverting the meaning).
Therefore, in our tests, two words are considered:
distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).
to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).
equivalent if the cosine of the embeddings is higher than 0.85 (or 0.95).
We report results using different thresholds of cosine similarity as we noticed that cosine similarity is skewed to higher values in the embedding space, as observed in related work BIBREF14 , BIBREF15 . We used the following sources of data for testing Class Membership:
AP+Battig data. This data was collected from the evaluation data provided by BIBREF10 . These correspond to 29 semantic classes.
Twitter-Class - collected manually by the authors by checking top most frequent words in the dictionary and then expanding the classes. These include the following 6 sets (number of elements in brackets): smileys (13), months (12), countries (6), names (19), surnames (14) Portuguese cities (9).
For the Class Distinction test, we pair each element of each of the gold standard classes with all the other elements from other classes (removing duplicate pairs since ordering does not matter), and we generate pairs of words which are supposed to belong to different classes. For the Word Equivalence test, we manually collected equivalent pairs, focusing on abbreviations that are popular on Twitter (e.g. “qt" INLINEFORM0 “quanto" or “lx" INLINEFORM1 “lisboa") and on frequent acronyms (e.g. “slb" INLINEFORM2 “benfica"). In total, we compiled 48 equivalence pairs.
For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classify as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95).
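As an illustration, the coverage and success ratios for one of the tests can be computed as in the following sketch, assuming the embeddings are exposed as a plain word-to-vector dictionary:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def run_pair_test(pairs, emb, threshold, expect_similar=True):
    """pairs: list of (word_a, word_b) gold pairs; emb: dict word -> vector."""
    covered = [(a, b) for a, b in pairs if a in emb and b in emb]
    coverage = len(covered) / len(pairs) if pairs else 0.0
    if not covered:
        return coverage, 0.0
    correct = 0
    for a, b in covered:
        sim = cosine(emb[a], emb[b])
        correct += (sim >= threshold) if expect_similar else (sim < threshold)
    return coverage, correct / len(covered)

# e.g. Word Equivalence at the stricter threshold:
# coverage, success = run_pair_test(equivalence_pairs, emb, 0.95)
# and Class Distinction (pairs expected to be dissimilar):
# coverage, tn_rate = run_pair_test(distinction_pairs, emb, 0.70, expect_similar=False)
```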
It is worth making a final comment about the gold standard data. Although we do not expect this gold standard data to be sufficient for a wide-spectrum evaluation of the resulting embeddings, it should be enough for providing us clues regarding areas where the embedding process is capturing enough semantics, and where it is not. These should still provide valuable indications for planning how to produce the much larger database of word embeddings.
Results and Analysis
We ran the training process and performed the corresponding evaluation for 12 combinations of the size of the vocabulary to be embedded and the volume of training data used. Table TABREF27 presents some overall statistics after training for 40 epochs.
The average time per epoch increases first with the size of the vocabulary to embed INLINEFORM0 (because the model will have more parameters), and then, for each INLINEFORM1 , with the volume of training data. Using our testbed (Section SECREF4 ), the total time of learning in our experiments varied from a minimum of 160 seconds, with INLINEFORM2 = 2048 and 25% of data, to a maximum of 22.5 hours, with INLINEFORM3 = 32768 and using 100% of the training data available (extracted from 5M tweets). These numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete Twitter corpus we have, consisting of 300M tweets.
We now analyze the learning process itself. We plot the training set loss and validation set loss for the different values of INLINEFORM0 (Figure FIGREF28 left) with 40 epochs and using all the available data. As expected, the loss decreases after each epoch, with the validation loss, although slightly higher, following the same trend. When using 100% of the data we see no model overfitting. We can also observe that the higher INLINEFORM1 is, the higher the absolute values of the losses. This is not surprising because as the number of words to predict becomes higher the problem tends to become harder. Also, because we keep the dimensionality of the embedding space constant (64 dimensions), it becomes increasingly hard to represent and differentiate larger vocabularies in the same hyper-volume. We believe this is an especially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute.
On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model.
Intrinsic Evaluation
Table TABREF30 presents results for the three different tests described in Section SECREF4 . The first (expected) result is that the coverage metrics increase with the size of the vocabulary being embedded, i.e., INLINEFORM0 . Because the Word Equivalence test set was specifically created for evaluating Twitter-based embedding, when embedding INLINEFORM1 = 32768 words we achieve almost 90% test coverage. On the other hand, for the Class Distinction test set - which was created by doing the cross product of the test cases of each class in Class Membership test set - we obtain very low coverage figures. This indicates that it is not always possible to re-use previously compiled gold-standard data, and that it will be important to compile gold-standard data directly from Twitter content if we want to perform a more precise evaluation.
The effect of varying the cosine similarity decision threshold from 0.70 to 0.80 for the Class Membership test shows that the percentage of test cases classified as correct drops significantly. However, the drop is more accentuated when training with only a portion of the available data. The difference between the two alternative threshold values is even larger in the Word Equivalence test.
The Word Equivalence test, in which we consider two words equivalent if the cosine of the embedding vectors is higher than 0.95, proved to be an extremely demanding test. Nevertheless, for INLINEFORM0 = 32768 the results are far superior, and for a much larger coverage, than for lower INLINEFORM1 . The same happens with the Class Membership test.
On the other hand, the Class Distinction test shows a different trend for larger values of INLINEFORM0 = 32768 but the coverage for other values of INLINEFORM1 is so low that it becomes difficult to hypothesize about the reduced True Negative (TN) percentage obtained for the largest INLINEFORM2 . It would be necessary to confirm this behavior with even larger values of INLINEFORM3 . One might hypothesize that the ability to distinguish between classes requires larger thresholds when INLINEFORM4 is large. Also, we can speculate about the need to increase the number of dimensions to be able to encapsulate different semantic information for so many words.
Further Analysis regarding Evaluation Metrics
Despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available, these results also revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds. This sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding hyper-parameters, which was also observed in other recent works BIBREF15 .
By using these absolute thresholds as criteria for deciding similarity of words, we create a dependency between the evaluation metrics and the geometry of the embedded data. If we see the embedding data as a graph, this means that metrics will change if we apply scaling operations to certain parts of the graph, even if its structure (i.e. relative position of the embedded words) does not change.
For most practical purposes (including training downstream ML models) absolute distances have little meaning. What is fundamental is that the resulting embeddings are able to capture topological information: similar words should be closer to each other than they are to words that are dissimilar to them (under the various criteria of similarity we care about), independently of the absolute distances involved.
It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics.
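As an illustration of such a threshold-free, topology-oriented metric, the following sketch checks the class-separation property described above for a single word (the function and variable names are ours):

```python
import numpy as np

def class_separation(word, same_class, other_words, emb):
    """True if every known same-class word is closer to `word` than any word
    from a different class, regardless of the absolute cosine values."""
    def cos(a, b):
        u, v = emb[a], emb[b]
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    same = [cos(word, w) for w in same_class if w in emb and w != word]
    other = [cos(word, w) for w in other_words if w in emb]
    if not same or not other:
        return None  # pair not covered by the embeddings
    return min(same) > max(other)
```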
Conclusions
Producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium. We implemented a neural word embedding model that embeds words based on n-gram information extracted from a sample of the Portuguese Twitter stream, and which can be seen as a flexible baseline for further experiments in the field. Work reported in this paper is a preliminary study of trying to find parameters for training word embeddings from Twitter and adequate evaluation tests and gold-standard data.
Results show that using less than 50% of the available training examples for each vocabulary size might result in overfitting. The resulting embeddings obtain an interesting performance on intrinsic evaluation tests when trained on a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. Nevertheless, results exhibit a skewness in the cosine similarity scores that should be further explored in future work. More specifically, the Class Distinction test set proved to be challenging and opens the door to evaluating not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values.
Therefore, a key area of future exploration has to do with better evaluation resources and metrics. We made some initial effort in this front. However, we believe that developing new intrinsic tests, agnostic to absolute values of metrics and concerned with topological aspects of the embedding space, and expanding gold-standard data with cases tailored for user-generated content, is of fundamental importance for the progress of this line of work.
Furthermore, we plan to make publicly available word embeddings trained from a large sample of 300M tweets collected from the Portuguese Twitter stream. This will require experimenting with producing embeddings with higher dimensionality (to avoid the cosine skewness effect) and training with even larger vocabularies. Also, there is room for experimenting with some of the hyper-parameters of the model itself (e.g. activation functions, dimensions of the layers), which we know have an impact on final results. | coverage metric, being distinct (cosine INLINEFORM0 0.7 or 0.8), belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), being equivalent (cosine INLINEFORM2 0.85 or 0.95) |
7ce7edd06925a943e32b59f3e7b5159ccb7acaf6 | 7ce7edd06925a943e32b59f3e7b5159ccb7acaf6_0 | Q: What experimental results suggest that using less than 50% of the available training examples might result in overfitting?
Text: Introduction
Word embeddings have great practical importance since they can be used as pre-computed high-density features to ML models, significantly reducing the amount of training data required in a variety of NLP tasks. However, there are several inter-related challenges with computing and consistently distributing word embeddings concerning the:
Not only is the space of possibilities for each of these aspects large, there are also challenges in performing a consistent large-scale evaluation of the resulting embeddings BIBREF0 . This makes systematic experimentation with alternative word-embedding configurations extremely difficult.
In this work, we make progress in trying to find good combinations of some of the previous parameters. We focus specifically on the task of computing word embeddings for processing the Portuguese Twitter stream. User-generated content (such as Twitter messages) tends to be populated by words that are specific to the medium, and that are constantly being added by users. These dynamics pose challenges to NLP systems, which have difficulties in dealing with out-of-vocabulary words. Therefore, learning a semantic representation for those words directly from the user-generated stream - and as the words arise - would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words.
Starting from our own implementation of a neural word embedding model, which should be seen as a flexible baseline model for further experimentation, our research tries to answer the following practical questions:
By answering these questions based on a reasonably small sample of Twitter data (5M), we hope to find the best way to proceed and train embeddings for Twitter vocabulary using the much larger amount of Twitter data available (300M), but for which parameter experimentation would be unfeasible. This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a large-scale database of embeddings for processing Portuguese Twitter data.
Related Work
There are several approaches to generating word embeddings. One can build models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe BIBREF1 , BIBREF2 , or one can extract such embeddings as by-products of more general models, which implicitly compute such word embeddings in the process of solving other language tasks.
Word embeddings methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional bag-of-words models. Moreover, this low dimensional space is able to capture lexical and semantic properties of words. Co-occurrence statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings. One creates a low rank approximation of the word co-occurrence matrix, such as in the case of Latent Semantic Analysis BIBREF3 and GloVe BIBREF2 . The other approach consists in extracting internal representations from neural network models of text BIBREF4 , BIBREF5 , BIBREF1 . Levy and Goldberg BIBREF6 showed that the two approaches are closely related.
Although word embedding research goes back several decades, it was the recent developments of Deep Learning and the word2vec framework BIBREF1 that captured the attention of the NLP community. Moreover, Mikolov et al. BIBREF7 showed that embeddings trained using word2vec models (CBOW and Skip-gram) exhibit linear structure, allowing analogy questions of the form “man:woman::king:??.” and can boost the performance of several text classification tasks.
One of the issues of recent work in training word embeddings is the variability of the experimental setups reported. For instance, in the paper describing GloVe BIBREF2 the authors trained their model on five corpora of different sizes and built a vocabulary of the 400K most frequent words. Mikolov et al. BIBREF7 trained with an 82K vocabulary while Mikolov et al. BIBREF1 trained with a 3M vocabulary. Recently, Arora et al. BIBREF8 proposed a generative model for learning embeddings that tries to provide some theoretical justification for nonlinear models (e.g. word2vec and GloVe) and for some hyper-parameter choices. The authors evaluated their model using a 68K vocabulary.
The organizers of SemEval 2016-Task 4: Sentiment Analysis in Twitter report that participants either used general-purpose pre-trained word embeddings, or trained from the Tweet 2016 dataset or “from some sort of dataset” BIBREF9 . However, participants report neither the size of the vocabulary used nor the possible effect it might have on the task-specific results.
Recently, Rodrigues et al. BIBREF10 created and distributed the first general-purpose embeddings for Portuguese. The gensim implementation of word2vec was used and the authors report results with different values for the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets to Portuguese, which they also made publicly available; we use some of those in this work.
Our Neural Word Embedding Model
The neural word embedding model we use in our experiments is heavily inspired in the one described in BIBREF4 , but ours is one layer deeper and is set to solve a slightly different word prediction task. Given a sequence of 5 words - INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 , the task the model tries to perform is that of predicting the middle word, INLINEFORM5 , based on the two words on the left - INLINEFORM6 INLINEFORM7 - and the two words on the right - INLINEFORM8 INLINEFORM9 : INLINEFORM10 . This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in “close” regions of the embedding hyper-space.
Our neural model is composed of the following layers:
All neural activations in the model are sigmoid functions. The model was implemented using the Syntagma library which relies on Keras BIBREF11 for model development, and we train the model using the built-in ADAM BIBREF12 optimizer with the default parameters.
Experimental Setup
We are interested in assessing two aspects of the word embedding process. On one hand, we wish to evaluate the semantic quality of the produced embeddings. On the other, we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary INLINEFORM0 we try to embed. These aspects have fundamental practical importance for deciding how we should attempt to produce the large-scale database of embeddings we will provide in the future. All resources developed in this work are publicly available.
Apart from the size of the vocabulary to be processed ( INLINEFORM0 ), the hyperparameters of the model that we could potentially explore are i) the dimensionality of the input word embeddings and ii) the dimensionality of the output word embeddings. As mentioned before, we set both to 64 dimensions after performing some quick manual experimentation. Full hyperparameter exploration is left for future work.
Our experimental testbed comprises a desktop with a nvidia TITAN X (Pascal), Intel Core Quad i7 3770K 3.5Ghz, 32 GB DDR3 RAM and a 180GB SSD drive.
Training Data
We randomly sampled 5M tweets from a corpus of 300M tweets collected from the Portuguese Twitter community BIBREF13 . The 5M comprise a total of 61.4M words (approx. 12 words per tweet on average). From those 5M tweets we generated a database containing 18.9M distinct 5-grams, along with their frequency counts. In this process, all text was down-cased. To help anonymize the n-gram information, we substituted all the Twitter handles with an artificial token “T_HANDLE". We also substituted all HTTP links with the token “LINK". We prepended two special tokens to complete the 5-grams generated from the first two words of the tweet, and we correspondingly appended two other special tokens to complete 5-grams centered around the two last tokens of the tweet.
Tokenization was performed by trivially separating tokens by blank space. No linguistic pre-processing, such as separating punctuation from words, was performed. We opted not to do any pre-processing so as not to introduce any linguistic bias from another tool (tokenization of user-generated content is not a trivial problem). The most direct consequence of not performing any linguistic pre-processing is that of increasing the vocabulary size and diluting token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings for both actual words (e.g. “ronaldo") and the words that have punctuation attached (e.g. “ronaldo!"). In practice, we believe that this can actually be an advantage for the downstream consumers of the embeddings, since they can also relax the requirements of their own tokenization stage. Overall, the dictionary thus produced contains approximately 1.3M distinct entries. Our dictionary was sorted by frequency, so the words with the lowest index correspond to the most common words in the corpus.
We used the information from the 5-gram database to generate all training data used in the experiments. For a fixed size INLINEFORM0 of the target vocabulary to be embedded (e.g. INLINEFORM1 = 2048), we scanned the database to obtain all possible 5-grams for which all tokens were among the top INLINEFORM2 words of the dictionary (i.e. the top INLINEFORM3 most frequent words in the corpus). Depending on INLINEFORM4 , different numbers of valid training 5-grams were found in the database: the larger INLINEFORM5 the more valid 5-grams would pass the filter. The number of examples collected for each of the values of INLINEFORM6 is shown in Table TABREF16 .
Since one of the goals of our experiments is to understand the impact of using different amounts of training data, for each size of vocabulary to be embedded INLINEFORM0 we will run experiments training the models using 25%, 50%, 75% and 100% of the data available.
Metrics related with the Learning Process
We tracked metrics related to the learning process itself, as a function of the vocabulary size to be embedded INLINEFORM0 and of the fraction of training data used (25%, 50%, 75% and 100%). For all possible configurations, we recorded the values of the training and validation loss (cross entropy) after each epoch. Tracking these metrics serves as a minimalistic sanity check: if the model is not able to solve the word prediction task with some degree of success (e.g. if we observe no substantial decay in the losses) then one should not expect the embeddings to capture any of the distributional information they are supposed to capture.
Tests and Gold-Standard Data for Intrinsic Evaluation
Using the gold standard data (described below), we performed three types of tests:
Class Membership Tests: embeddings corresponding to two members of the same semantic class (e.g. “Months of the Year", “Portuguese Cities", “Smileys") should be close, since they are supposed to be found in mostly the same contexts.
Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes are expected to be found in significantly different contexts.
Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque" abbreviated by “pq") and partial references (e.g. “slb and benfica") should be almost equal, since both alternatives are supposed to be interchangeable in all contexts (either maintaining or inverting the meaning).
Therefore, in our tests, two words are considered:
distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).
to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).
equivalent if the cosine of the embeddings is higher than 0.85 (or 0.95).
We report results using different thresholds of cosine similarity as we noticed that cosine similarity is skewed to higher values in the embedding space, as observed in related work BIBREF14 , BIBREF15 . We used the following sources of data for testing Class Membership:
AP+Battig data. This data was collected from the evaluation data provided by BIBREF10 . These correspond to 29 semantic classes.
Twitter-Class - collected manually by the authors by checking top most frequent words in the dictionary and then expanding the classes. These include the following 6 sets (number of elements in brackets): smileys (13), months (12), countries (6), names (19), surnames (14) Portuguese cities (9).
For the Class Distinction test, we pair each element of each of the gold standard classes with all the other elements from other classes (removing duplicate pairs since ordering does not matter), and we generate pairs of words which are supposed to belong to different classes. For the Word Equivalence test, we manually collected equivalent pairs, focusing on abbreviations that are popular on Twitter (e.g. “qt" INLINEFORM0 “quanto" or “lx" INLINEFORM1 “lisboa") and on frequent acronyms (e.g. “slb" INLINEFORM2 “benfica"). In total, we compiled 48 equivalence pairs.
For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classify as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95).
It is worth making a final comment about the gold standard data. Although we do not expect this gold standard data to be sufficient for a wide-spectrum evaluation of the resulting embeddings, it should be enough for providing us clues regarding areas where the embedding process is capturing enough semantics, and where it is not. These should still provide valuable indications for planning how to produce the much larger database of word embeddings.
Results and Analysis
We ran the training process and performed the corresponding evaluation for 12 combinations of the size of the vocabulary to be embedded and the volume of training data used. Table TABREF27 presents some overall statistics after training for 40 epochs.
The average time per epoch increases first with the size of the vocabulary to embed INLINEFORM0 (because the model will have more parameters), and then, for each INLINEFORM1 , with the volume of training data. Using our testbed (Section SECREF4 ), the total time of learning in our experiments varied from a minimum of 160 seconds, with INLINEFORM2 = 2048 and 25% of data, to a maximum of 22.5 hours, with INLINEFORM3 = 32768 and using 100% of the training data available (extracted from 5M tweets). These numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete Twitter corpus we have, consisting of 300M tweets.
We now analyze the learning process itself. We plot the training set loss and validation set loss for the different values of INLINEFORM0 (Figure FIGREF28 left) with 40 epochs and using all the available data. As expected, the loss decreases after each epoch, with the validation loss, although slightly higher, following the same trend. When using 100% of the data we see no model overfitting. We can also observe that the higher INLINEFORM1 is, the higher the absolute values of the losses. This is not surprising because as the number of words to predict becomes higher the problem tends to become harder. Also, because we keep the dimensionality of the embedding space constant (64 dimensions), it becomes increasingly hard to represent and differentiate larger vocabularies in the same hyper-volume. We believe this is an especially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute.
On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model.
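In practice, stopping the training once the validation loss stops improving can be automated with a standard early-stopping callback; the patience value below is an assumption, not the setting used in these experiments:

```python
# Stop when the validation loss has not improved for a few epochs and keep
# the weights from the best epoch observed so far.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=40, callbacks=[early_stop])
```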
Intrinsic Evaluation
Table TABREF30 presents results for the three different tests described in Section SECREF4 . The first (expected) result is that the coverage metrics increase with the size of the vocabulary being embedded, i.e., INLINEFORM0 . Because the Word Equivalence test set was specifically created for evaluating Twitter-based embedding, when embedding INLINEFORM1 = 32768 words we achieve almost 90% test coverage. On the other hand, for the Class Distinction test set - which was created by doing the cross product of the test cases of each class in Class Membership test set - we obtain very low coverage figures. This indicates that it is not always possible to re-use previously compiled gold-standard data, and that it will be important to compile gold-standard data directly from Twitter content if we want to perform a more precise evaluation.
The effect of varying the cosine similarity decision threshold from 0.70 to 0.80 for the Class Membership test shows that the percentage of test cases classified as correct drops significantly. However, the drop is more accentuated when training with only a portion of the available data. The difference between the two alternative threshold values is even larger in the Word Equivalence test.
The Word Equivalence test, in which we consider two words equivalent if the cosine of the embedding vectors is higher than 0.95, proved to be an extremely demanding test. Nevertheless, for INLINEFORM0 = 32768 the results are far superior, and for a much larger coverage, than for lower INLINEFORM1 . The same happens with the Class Membership test.
On the other hand, the Class Distinction test shows a different trend for larger values of INLINEFORM0 = 32768 but the coverage for other values of INLINEFORM1 is so low that it becomes difficult to hypothesize about the reduced True Negative (TN) percentage obtained for the largest INLINEFORM2 . It would be necessary to confirm this behavior with even larger values of INLINEFORM3 . One might hypothesize that the ability to distinguish between classes requires larger thresholds when INLINEFORM4 is large. Also, we can speculate about the need to increase the number of dimensions to be able to encapsulate different semantic information for so many words.
Further Analysis regarding Evaluation Metrics
Despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available, these results also revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds. This sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding hyper-parameters, which was also observed in other recent works BIBREF15 .
By using these absolute thresholds as criteria for deciding similarity of words, we create a dependency between the evaluation metrics and the geometry of the embedded data. If we see the embedding data as a graph, this means that metrics will change if we apply scaling operations to certain parts of the graph, even if its structure (i.e. relative position of the embedded words) does not change.
For most practical purposes (including training downstream ML models) absolute distances have little meaning. What is fundamental is that the resulting embeddings are able to capture topological information: similar words should be closer to each other than they are to words that are dissimilar to them (under the various criteria of similarity we care about), independently of the absolute distances involved.
It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics.
Conclusions
Producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium. We implemented a neural word embedding model that embeds words based on n-gram information extracted from a sample of the Portuguese Twitter stream, and which can be seen as a flexible baseline for further experiments in the field. Work reported in this paper is a preliminary study of trying to find parameters for training word embeddings from Twitter and adequate evaluation tests and gold-standard data.
Results show that using less than 50% of the available training examples for each vocabulary size might result in overfitting. The resulting embeddings obtain an interesting performance on intrinsic evaluation tests when trained on a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. Nevertheless, results exhibit a skewness in the cosine similarity scores that should be further explored in future work. More specifically, the Class Distinction test set proved to be challenging and opens the door to evaluating not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values.
Therefore, a key area of future exploration has to do with better evaluation resources and metrics. We made some initial effort in this front. However, we believe that developing new intrinsic tests, agnostic to absolute values of metrics and concerned with topological aspects of the embedding space, and expanding gold-standard data with cases tailored for user-generated content, is of fundamental importance for the progress of this line of work.
Furthermore, we plan to make publicly available word embeddings trained from a large sample of 300M tweets collected from the Portuguese Twitter stream. This will require experimenting with producing embeddings with higher dimensionality (to avoid the cosine skewness effect) and training with even larger vocabularies. Also, there is room for experimenting with some of the hyper-parameters of the model itself (e.g. activation functions, dimensions of the layers), which we know have an impact on final results. | consistent increase in the validation loss after about 15 epochs
a883bb41449794e0a63b716d9766faea034eb359 | a883bb41449794e0a63b716d9766faea034eb359_0 | Q: What multimodality is available in the dataset?
Text: Introduction
A great deal of commonsense knowledge about the world we live in is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) are very hard for machines as they demand modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste. From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different.
Over the past few years, many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0, BIBREF1, BIBREF3, BIBREF4. To mitigate the aforementioned challenges, the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step. Although these models can accurately learn to make local predictions, they may lack global consistency BIBREF3, BIBREF4, not to mention that building such annotated corpora is very labor-intensive. In this work, we take a different direction and explore the problem from a multimodal standpoint. Our basic motivation, as illustrated in Fig. FIGREF2, is that accompanying images provide complementary cues about causal effects and state changes. For instance, it is quite easy to distinguish raw meat from cooked one in visual domain.
In particular, we take advantage of the recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired by BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporate entities into the comprehension process and make it possible to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps.
Visual Reasoning in RecipeQA
In our study, we particularly focus on the visual reasoning tasks of RecipeQA, namely visual cloze, visual coherence, and visual ordering tasks, each of which examines a different reasoning skill. We briefly describe these tasks below.
Visual Cloze. In the visual cloze task, the question is formed by a sequence of four images from consecutive steps of a recipe where one of them is replaced by a placeholder. A model should select the correct one from a multiple-choice list of four answer candidates to fill in the missing piece. In that regard, the task inherently requires aligning visual and textual information and understanding temporal relationships between the cooking actions and the entities.
Visual Coherence. The visual coherence task tests the ability to identify the image within a sequence of four images that is inconsistent with the text instructions of a cooking recipe. To succeed in this task, a model should have a clear understanding of the procedure described in the recipe and at the same time connect language and vision.
Visual Ordering. The visual ordering task is about grasping the temporal flow of visual events with the help of the given recipe text. The questions show a set of four images from the recipe and the task is to sort jumbled images into the correct order. Here, a model needs to infer the temporal relations between the images and align them with the recipe steps.
Procedural Reasoning Networks
In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images.
Input Module extracts vector representations of inputs at different levels of granularity by using several different encoders.
Reasoning Module scans the procedural text and tracks the states of the entities and their relations through a recurrent relational memory core unit BIBREF5.
Attention Module computes context-aware query vectors and query-aware context vectors as well as query-aware memory vectors.
Modeling Module employs two multi-layered RNNs to encode previous layers outputs.
Output Module scores a candidate answer from the given multiple-choice list.
At a high level, as the model is reading the cooking recipe, it continually updates the internal memory representations of the entities (ingredients) based on the content of each step – it keeps track of changes in the states of the entities, providing an entity-centric summary of the recipe. The response to a question and a possible answer depends on the representation of the recipe text as well as the last states of the entities. All this happens in a series of implicit relational reasoning steps and there is no need for explicitly encoding the state in terms of a predefined vocabulary.
Procedural Reasoning Networks ::: Input Module
Let the triple $(\mathbf {R},\mathbf {Q},\mathbf {A})$ be a sample input. Here, $\mathbf {R}$ denotes the input recipe which contains textual instructions composed of $N$ words in total. $\mathbf {Q}$ represents the question that consists of a sequence of $M$ images. $\mathbf {A}$ denotes an answer that is either a single image or a series of $L$ images depending on the reasoning task. In particular, for the visual cloze and the visual coherence type questions, the answer contains a single image ($L=1$) and for the visual ordering task, it includes a sequence.
We encode the input recipe $\mathbf {R}$ at character, word, and step levels. Character-level embedding layer uses a convolutional neural network, namely CharCNN model by BIBREF7, which outputs character level embeddings for each word and alleviates the issue of out-of-vocabulary (OOV) words. In word embedding layer, we use a pretrained GloVe model BIBREF8 and extract word-level embeddings. The concatenation of the character and the word embeddings are then fed to a two-layer highway network BIBREF10 to obtain a contextual embedding for each word in the recipe. This results in the matrix $\mathbf {R}^{\prime } \in \mathbb {R}^{2d \times N}$.
On top of these layers, we have another layer that encodes the steps of the recipe in an individual manner. Specifically, we obtain a step-level contextual embedding of the input recipe containing $T$ steps as $\mathcal {S}=(\mathbf {s}_1,\mathbf {s}_2,\dots ,\mathbf {s}_T)$ where $\mathbf {s}_i$ represents the final state of a BiLSTM encoding the $i$-th step of the recipe obtained from the character and word-level embeddings of the tokens exist in the corresponding step.
We represent both the question $\mathbf {Q}$ and the answer $\mathbf {A}$ in terms of visual embeddings. Here, we employ a pretrained ResNet-50 model BIBREF11 trained on ImageNet dataset BIBREF12 and represent each image as a real-valued 2048-d vector using features from the penultimate average-pool layer. Then these embeddings are passed first to a multilayer perceptron (MLP) and then its outputs are fed to a BiLSTM. We then form a matrix $\mathbf {Q}^{\prime } \in \mathbb {R}^{2d \times M}$ for the question by concatenating the cell states of the BiLSTM. For the visual ordering task, to represent the sequence of images in the answer with a single vector, we additionally use a BiLSTM and define the answering embedding by the summation of the cell states of the BiLSTM. Finally, for all tasks, these computations produce answer embeddings denoted by $\mathbf {a} \in \mathbb {R}^{2d \times 1}$.
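A condensed PyTorch sketch of the visual side of this input module (frozen ResNet-50 penultimate features passed through an MLP and a BiLSTM) is given below; dimensions not stated in the text are assumptions:

```python
# Sketch of encoding a sequence of question images: frozen ResNet-50 features
# -> MLP -> BiLSTM. The paper builds Q' from the BiLSTM cell states; returning
# the per-step outputs here is a simplification for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageSequenceEncoder(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        resnet = models.resnet50(pretrained=True)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # avg-pool features
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.mlp = nn.Sequential(nn.Linear(2048, 2 * d), nn.ReLU())
        self.rnn = nn.LSTM(2 * d, d, bidirectional=True, batch_first=True)

    def forward(self, images):                    # images: (batch, M, 3, 224, 224)
        b, m = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1)).flatten(1)   # (b*m, 2048)
        feats = self.mlp(feats).view(b, m, -1)                   # (b, m, 2d)
        outputs, _ = self.rnn(feats)                             # (b, m, 2d)
        return outputs
```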
Procedural Reasoning Networks ::: Reasoning Module
As mentioned before, comprehending a cooking recipe is mostly about entities (basic ingredients) and actions (cooking activities) described in the recipe instructions. Each action leads to changes in the states of the entities, which usually affects their visual characteristics. A change rarely occurs in isolation; in most cases, the action affects multiple entities at once. Hence, in our reasoning module, we have an explicit memory component implemented with relational memory units BIBREF5. This helps us to keep track of the entities, their state changes and their relations to each other over the course of the recipe (see Fig. FIGREF14). As we will examine in more detail in Section SECREF4, it also greatly improves the interpretability of model outputs.
Specifically, we set up the memory with a memory matrix $\mathbf {E} \in \mathbb {R}^{d_E \times K}$ by extracting $K$ entities (ingredients) from the first step of the recipe. We initialize each memory cell $\mathbf {e}_i$ representing a specific entity by its CharCNN and pre-trained GloVe embeddings. From now on, we will use the terms memory cells and entities interchangeably throughout the paper. Since the input recipe is given in the form of a procedural text decomposed into a number of steps, we update the memory cells after each step, reflecting the state changes that happened to the entities. This update procedure is modelled via a relational recurrent neural network (R-RNN), recently proposed by BIBREF5. It is built on a 2-dimensional LSTM model whose matrix of cell states represents our memory matrix $\mathbf {E}$. Here, each row $i$ of the matrix $\mathbf {E}$ refers to a specific entity $\mathbf {e}_i$ and is updated after each recipe step $t$ as follows: $\mathbf {\phi }_{i,t}=\text{R-RNN}(\mathbf {s}_{t}, \mathbf {\phi }_{i,t-1}),$
where $\mathbf {s}_{t}$ denotes the embedding of recipe step $t$ and $\mathbf {\phi }_{i,t}=(\mathbf {h}_{i,t},\mathbf {e}_{i,t})$ is the cell state of the R-RNN at step $t$ with $\mathbf {h}_{i,t}$ and $\mathbf {e}_{i,t}$ being the $i$-th row of the hidden state of the R-RNN and the dynamic representation of entity $\mathbf {e}_{i}$ at the step $t$, respectively. The R-RNN model exploits a multi-headed self-attention mechanism BIBREF13 that allows memory cells to interact with each other and attend multiple locations simultaneously during the update phase.
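A highly simplified, self-contained stand-in for this update pattern is sketched below. It is not the relational memory core of BIBREF5 itself: after each step, every entity cell attends over all cells plus the current step embedding via multi-head attention, and a gated cell writes the result back. The shared dimensionality and module choices are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SimplifiedEntityMemory(nn.Module):
    """Toy sketch of step-by-step entity memory updates (not the exact R-RNN of BIBREF5)."""
    def __init__(self, d_e=100, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_e, n_heads, batch_first=True)
        self.write = nn.GRUCell(d_e, d_e)            # gated write into each memory cell

    def forward(self, entities, step_embeddings):
        # entities: (K, d_e) initial cells; step_embeddings: (T, d_e), one per recipe step
        memory = entities.unsqueeze(0)                                # (1, K, d_e)
        for s_t in step_embeddings:                                   # read the recipe step by step
            context = torch.cat([memory, s_t.view(1, 1, -1)], dim=1)  # all cells + current step
            attended, _ = self.attn(memory, context, context)         # cells interact and attend the step
            updated = self.write(attended.squeeze(0), memory.squeeze(0))
            memory = updated.unsqueeze(0)                             # new entity states after this step
        return memory.squeeze(0)                                      # (K, d_e) final entity states

memory = SimplifiedEntityMemory(d_e=100)
entities = torch.randn(6, 100)     # K=6 extracted ingredients
steps = torch.randn(8, 100)        # T=8 recipe step embeddings
print(memory(entities, steps).shape)   # torch.Size([6, 100])
```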
In Fig. FIGREF14, we illustrate how this interaction takes place in our relational memory module by considering a sample cooking recipe and by presenting how the attention matrix changes throughout the recipe. In particular, the attention matrix at a specific time shows the attention flow from one entity (memory cell) to another along with the attention weights to the corresponding recipe step (offset column). The color intensity shows the magnitude of the attention weights. As can be seen from the figure, the internal representations of the entities are actively updated at each step. Moreover, as argued in BIBREF5, this can be interpreted as a form of relational reasoning as each update on a specific memory cell is operated in relation to others. Here, we should note that it is often difficult to make sense of these attention weights. However, we observe that the attention matrix changes very gradually near the completion of the recipe.
Procedural Reasoning Networks ::: Attention Module
Attention module is in charge of linking the question with the recipe text and the entities present in the recipe. It takes the matrices $\mathbf {Q^{\prime }}$ and $\mathbf {R}^{\prime }$ from the input module, and $\mathbf {E}$ from the reasoning module and constructs the question-aware recipe representation $\mathbf {G}$ and the question-aware entity representation $\mathbf {Y}$. Following the attention flow mechanism described in BIBREF14, we specifically calculate attentions in four different directions: (1) from question to recipe, (2) from recipe to question, (3) from question to entities, and (4) from entities to question.
The first two of these attentions require computing a shared affinity matrix $\mathbf {S}^R \in \mathbb {R}^{N \times M}$, with $\mathbf {S}^R_{i,j}$ indicating the similarity between the $i$-th recipe word and the $j$-th image in the question, estimated by $\mathbf {S}^R_{i,j}=\mathbf {w}^{\top }_{R}\left[\mathbf {R}^{\prime }_i;\mathbf {Q}^{\prime }_j;\mathbf {R}^{\prime }_i \circ \mathbf {Q}^{\prime }_j\right],$
where $\mathbf {w}^{\top }_{R}$ is a trainable weight vector, $\circ $ and $[;]$ denote elementwise multiplication and concatenation operations, respectively.
Recipe-to-question attention determines the images within the question that are most relevant to each word of the recipe. Let $\mathbf {\tilde{Q}} \in \mathbb {R}^{2d \times N}$ represent the recipe-to-question attention matrix with its $i$-th column being given by $\mathbf {\tilde{Q}}_i=\sum _j \mathbf {a}_{ij}\mathbf {Q}^{\prime }_j$ where the attention weight is computed by $\mathbf {a}_i=\operatorname{softmax}(\mathbf {S}^R_{i}) \in \mathbb {R}^M$.
Question-to-recipe attention signifies the words within the recipe that have the closest similarity to each image in the question, and constructs an attended recipe vector given by $\tilde{\mathbf {r}}=\sum _{i}\mathbf {b}_i\mathbf {R}^{\prime }_i$ with the attention weight calculated by $\mathbf {b}=\operatorname{softmax}(\operatorname{max}_{\mathit {col}}(\mathbf {S}^R)) \in \mathbb {R}^{N}$ where $\operatorname{max}_{\mathit {col}}$ denotes the maximum function across the columns. The question-to-recipe matrix is then obtained by replicating $\tilde{\mathbf {r}}$ $N$ times across the columns, giving $\tilde{\mathbf {R}} \in \mathbb {R}^{2d \times N}$.
Then, we construct the question-aware representation of the input recipe, $\mathbf {G} \in \mathbb {R}^{8d \times N}$, with its $i$-th column $\mathbf {G}_i \in \mathbb {R}^{8d}$ denoting the final embedding of the $i$-th word, given by $\mathbf {G}_i=\left[\mathbf {R}^{\prime }_i;\mathbf {\tilde{Q}}_i;\mathbf {R}^{\prime }_i \circ \mathbf {\tilde{Q}}_i;\mathbf {R}^{\prime }_i \circ \tilde{\mathbf {R}}_i\right].$
Attentions from question to entities and from entities to question are computed in a way similar to the ones described above. The only difference is that a separate shared affinity matrix is computed between the memory matrix encoding the entities, $\mathbf {E}$, and the question $\mathbf {Q}^{\prime }$. These attentions are then used to construct the question-aware representation of entities, denoted by $\mathbf {Y}$, which links and integrates the images in the question and the entities in the input recipe.
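A compact sketch of these attention computations is given below, assuming the BiDAF-style similarity function reconstructed above (a trainable weight vector over $[r; q; r \circ q]$). Tensor shapes follow the notation in the text; the entity attentions would reuse the same function with the entity memory in place of $\mathbf {R}^{\prime }$.

```python
import torch
import torch.nn.functional as F

def attention_flow(R, Q, w):
    """R: (N, 2d) recipe word vectors, Q: (M, 2d) question image vectors, w: (6d,) weights."""
    N, M = R.size(0), Q.size(0)
    r = R.unsqueeze(1).expand(N, M, -1)
    q = Q.unsqueeze(0).expand(N, M, -1)
    S = torch.cat([r, q, r * q], dim=-1) @ w           # shared affinity matrix S^R, (N, M)

    # Recipe-to-question attention: each recipe word attends over the question images.
    Q_tilde = F.softmax(S, dim=1) @ Q                  # (N, 2d)

    # Question-to-recipe attention: one attended recipe vector, replicated N times.
    b = F.softmax(S.max(dim=1).values, dim=0)          # (N,)
    R_tilde = (b @ R).expand(N, -1)                    # (N, 2d)

    # Question-aware recipe representation G with 8d features per word.
    return torch.cat([R, Q_tilde, R * Q_tilde, R * R_tilde], dim=-1)

d = 100
G = attention_flow(torch.randn(30, 2 * d), torch.randn(4, 2 * d), torch.randn(6 * d))
print(G.shape)   # torch.Size([30, 800])
```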
Procedural Reasoning Networks ::: Modeling Module
The modeling module takes the question-aware representations of the recipe $\mathbf {G}$ and the entities $\mathbf {Y}$, and forms their combined vector representation. For this purpose, we first use a two-layer BiLSTM to read the question-aware recipe $\mathbf {G}$ and to encode the interactions among the words conditioned on the question. For each direction of the BiLSTM, we use its hidden state after reading the last token as its output. In the end, we obtain a vector embedding $\mathbf {c} \in \mathbb {R}^{2d \times 1}$. Similarly, we employ a second BiLSTM, this time over the entities $\mathbf {Y}$, which results in another vector embedding $\mathbf {f} \in \mathbb {R}^{2d_E \times 1}$. Finally, these vector representations are concatenated and then projected to a fixed-size representation using $\mathbf {o}=\varphi _o(\left[\mathbf {c}; \mathbf {f}\right]) \in \mathbb {R}^{2d \times 1}$ where $\varphi _o$ is a multilayer perceptron with a $\operatorname{tanh}$ activation function.
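A minimal sketch of this module is shown below. The width of the question-aware entity representation $\mathbf {Y}$ is an assumption (taken as $8 d_E$ by analogy with $\mathbf {G}$), and batch handling is kept to a single example for brevity.

```python
import torch
import torch.nn as nn

class ModelingModule(nn.Module):
    """Sketch: summarize G and Y with BiLSTMs, then project to a single vector o."""
    def __init__(self, d=100, d_e=100):
        super().__init__()
        self.recipe_rnn = nn.LSTM(8 * d, d, num_layers=2, bidirectional=True, batch_first=True)
        self.entity_rnn = nn.LSTM(8 * d_e, d_e, bidirectional=True, batch_first=True)
        self.proj = nn.Sequential(nn.Linear(2 * d + 2 * d_e, 2 * d), nn.Tanh())

    def forward(self, G, Y):
        # G: (1, N, 8d) question-aware recipe; Y: (1, K, 8d_e) question-aware entities (assumed width)
        _, (h_g, _) = self.recipe_rnn(G)
        c = torch.cat([h_g[-2], h_g[-1]], dim=-1)     # last-layer forward/backward states -> (1, 2d)
        _, (h_y, _) = self.entity_rnn(Y)
        f = torch.cat([h_y[-2], h_y[-1]], dim=-1)     # (1, 2d_e)
        return self.proj(torch.cat([c, f], dim=-1))   # o: (1, 2d)

module = ModelingModule(d=100, d_e=100)
o = module(torch.randn(1, 30, 800), torch.randn(1, 6, 800))
print(o.shape)   # torch.Size([1, 200])
```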
Procedural Reasoning Networks ::: Output Module
The output module takes the output of the modeling module, which encodes the question-aware recipe and the entities $\mathbf {Y}$, together with the embedding of the answer $\mathbf {A}$, and returns a similarity score which is used while determining the correct answer. Among all the candidate answers, the one having the highest similarity score is chosen as the correct answer. To train our proposed procedural reasoning network, we employ a hinge ranking loss BIBREF15, similar to the one used in BIBREF2, given below.
where $\gamma $ is the margin parameter, $\mathbf {a}_+$ and $\mathbf {a}_{-}$ are the correct and the incorrect answers, respectively.
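Since the scoring function is only described as "a similarity score", the sketch below assumes a dot product between the joint representation $\mathbf {o}$ and each candidate answer embedding; only the hinge structure itself comes from the text.

```python
import torch

def hinge_ranking_loss(o, a_pos, a_negs, gamma=0.1):
    """o: (2d,) joint recipe/entity vector; a_pos: (2d,) correct answer embedding;
    a_negs: (num_neg, 2d) incorrect answer embeddings; gamma is the margin."""
    s_pos = o @ a_pos                                   # similarity to the correct answer
    s_negs = a_negs @ o                                 # similarities to the incorrect answers
    return torch.clamp(gamma - s_pos + s_negs, min=0.0).mean()

o, a_pos, a_negs = torch.randn(200), torch.randn(200), torch.randn(3, 200)
print(hinge_ranking_loss(o, a_pos, a_negs))             # scalar loss over the 3 wrong choices
```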
Experiments
In this section, we describe our experimental setup and then analyze the results of the proposed Procedural Reasoning Networks (PRN) model.
Experiments ::: Entity Extraction
Given a recipe, we automatically extract the entities from the initial step of a recipe by using a dictionary of ingredients. While determining the ingredients, we exploit Recipe1M BIBREF16 and Kaggle What’s Cooking Recipes BIBREF17 datasets, and form our dictionary using the most commonly used ingredients in the training set of RecipeQA. For the cases when no entity can be extracted from the recipe automatically (20 recipes in total), we manually annotate those recipes with the related entities.
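A simplified sketch of this dictionary lookup is shown below; the ingredient set is a tiny illustrative placeholder, whereas the paper builds its dictionary from Recipe1M and the Kaggle What's Cooking data.

```python
import re

# Illustrative placeholder dictionary; the real one is built from Recipe1M and the
# Kaggle What's Cooking recipes, restricted to ingredients frequent in RecipeQA training data.
INGREDIENTS = {"ground beef", "salt", "pepper", "cheese", "onion", "bun"}

def extract_entities(first_step_text, dictionary=INGREDIENTS):
    """Return the dictionary ingredients mentioned in the first step of a recipe."""
    text = first_step_text.lower()
    return sorted(ing for ing in dictionary
                  if re.search(r"\b" + re.escape(ing) + r"\b", text))

print(extract_entities("Season the ground beef with salt and pepper, then slice the onion."))
# ['ground beef', 'onion', 'pepper', 'salt']
```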
Experiments ::: Training Details
In our experiments, we separately trained models on each task, and we also investigated multi-task learning where a single model is trained to solve all these tasks at once. In total, the PRN architecture consists of $\sim $12M trainable parameters. We implemented our models in PyTorch BIBREF18 using the AllenNLP library BIBREF6. We used the Adam optimizer with a learning rate of 1e-4 and an early stopping criterion with the patience set to 10, meaning that training ends if the performance does not improve for 10 consecutive iterations. We considered a batch size of 32 due to our hardware constraints. In the multi-task setting, batches are sampled round-robin from all tasks, where each batch is solely composed of examples from one task. We performed our experiments on a system containing four NVIDIA GTX-1080Ti GPUs, and training a single model took around 2 hours. We employed the same hyperparameters for all the baseline systems. We plan to share our code and model implementation after the review process.
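The optimization setup reads roughly as the following PyTorch sketch; the data loaders, the model's loss interface, and the validation routine are placeholders, while the Adam learning rate, early stopping, and patience value mirror the description above.

```python
import torch

def train(model, train_loader, val_loader, evaluate_fn, lr=1e-4, patience=10, max_epochs=100):
    """Adam (lr 1e-4) with early stopping once `patience` evaluations pass without improvement."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, rounds_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:                 # batches of 32, one task per batch
            optimizer.zero_grad()
            loss = model(**batch)                  # assumes the model returns its training loss
            loss.backward()
            optimizer.step()
        val_loss = evaluate_fn(model, val_loader)  # user-supplied validation metric (lower is better)
        if val_loss < best_val:
            best_val, rounds_without_improvement = val_loss, 0
        else:
            rounds_without_improvement += 1
            if rounds_without_improvement >= patience:
                break                              # stop: no improvement for `patience` rounds
    return model
```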
Experiments ::: Baselines
We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.
Hasty Student BIBREF2 is a simple heuristics-based model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.
Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.
BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.
BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates to the memory cells. That is, it uses static entity embeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates.
Experiments ::: Results
Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In the single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In the multi-task training setting, where a single model is trained to solve all the tasks at once, PRN and BiDAF w/ static memory perform comparably and give much better results than BiDAF. Note that the model performances in the multi-task training setting are worse than the single-task performances. We believe this is because some of the tasks are inherently more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.
In Fig. FIGREF28, we illustrate the entity embedding space by projecting the learned embeddings from the step-by-step memory snapshots through time with t-SNE from the 200-d vector space to 3-d space. Color codes denote the categories of the cooking recipes. As can be seen, these step-aware embeddings show clear clustering of these categories. Moreover, within each cluster, the entities are grouped together in terms of their state characteristics. For instance, in the zoomed parts of the figure, chopped and sliced, or stirred and whisked entities are placed close to each other.
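The projection described here can be reproduced in outline with scikit-learn; the arrays below are random placeholders standing in for the 200-d memory snapshots and their recipe-category labels.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholders: one 200-d vector per (entity, step) memory snapshot, plus a category id each.
entity_states = np.random.randn(500, 200)
categories = np.random.randint(0, 5, size=500)

# Project the step-aware entity embeddings from 200-d to 3-d, as in Fig. FIGREF28.
projection = TSNE(n_components=3, perplexity=30, init="pca", random_state=0).fit_transform(entity_states)
print(projection.shape, categories.shape)   # (500, 3) (500,)
```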
Fig. FIGREF30 demonstrates entity arithmetic using the learned embeddings of each entity at each step. Here, we show that the learned embeddings from the memory snapshots can effectively capture the contextual information about the entities at each time point in the corresponding step while taking into account the recipe data. This basic arithmetic operation suggests that the proposed model can successfully capture the semantics of each entity's state in the corresponding step.
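The arithmetic can be illustrated with a few lines of NumPy; the vectors and snapshot names below are random, hypothetical stand-ins for the learned 200-d entity states, so the printed neighbour is only meaningful with real embeddings.

```python
import numpy as np

# Hypothetical snapshot matrix: one row per (entity, step) state, labelled by name.
states = np.random.randn(6, 200)
names = ["butter@raw", "butter@melted", "sugar@raw", "sugar@dissolved", "egg@raw", "egg@beaten"]

def nearest(query, states, names):
    """Cosine nearest neighbour of an arithmetic query among the stored snapshots."""
    sims = states @ query / (np.linalg.norm(states, axis=1) * np.linalg.norm(query) + 1e-8)
    return names[int(np.argmax(sims))]

# e.g. does vec(butter@melted) - vec(butter@raw) + vec(sugar@raw) land near sugar@dissolved?
print(nearest(states[1] - states[0] + states[2], states, names))
```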
Related Work
In recent years, tracking entities and their state changes has been explored in the literature from a variety of perspectives. In an early work, BIBREF21 proposed a dynamic memory-based network which updates entity states using a gating mechanism while reading the text. BIBREF22 presented a more structured memory-augmented model which employs memory slots for representing both entities and their relations. BIBREF23 suggested a conceptually similar model in which the pairwise relations between attended memories are utilized to encode the world state. The main difference between our approach and these works is that by utilizing relational memory core units we also allow memories to interact with each other during each update.
BIBREF24 showed that similar ideas can be used to compile supporting memories in tracking dialogue state. BIBREF25 has shown the importance of coreference signals for the reading comprehension task. More recently, BIBREF26 introduced a specialized recurrent layer which uses coreference annotations for improving reading comprehension tasks. On the language modeling task, BIBREF27 proposed a language model which can explicitly incorporate entities while dynamically updating their representations for a variety of tasks such as language modeling, coreference resolution, and entity prediction.
Our work builds upon and contributes to the growing literature on tracking state changes in procedural text. BIBREF0 presented a neural model that can learn to explicitly predict state changes of ingredients at different points in a cooking recipe. BIBREF1 proposed another entity-aware model to track entity states in scientific processes. BIBREF3 demonstrated that the prediction quality can be boosted by including hard and soft constraints to eliminate unlikely or favor probable state changes. In a follow-up work, BIBREF4 exploited the notion of label consistency in training to enforce similar predictions in similar procedural contexts. BIBREF28 proposed a model that dynamically constructs a knowledge graph while reading the procedural text to track the ever-changing states of entities. As discussed in the introduction, however, these previous methods use a strong inductive bias and assume that state labels are present during training. In our study, we deliberately focus on unlabeled procedural data and ask the question: can multimodality help to identify and provide insights into understanding state changes?
Conclusion
We have presented a new neural architecture called Procedural Reasoning Networks (PRN) for multimodal understanding of step-by-step instructions. Our proposed model is based on the successful BiDAF framework but also equipped with an explicit memory unit that provides an implicit mechanism to keep track of the changes in the states of the entities over the course of the procedure. Our experimental analysis on visual reasoning tasks in the RecipeQA dataset shows that the model significantly improves the results of the previous models, indicating that it better understands the procedural text and the accompanying images. Additionally, we carefully analyze our results and find that our approach learns meaningful dynamic representations of entities without any entity-level supervision. Although we achieve state-of-the-art results on RecipeQA, clearly there is still room for improvement compared to human performance. We also believe that the PRN architecture will be of value to other visual and textual sequential reasoning tasks.
Acknowledgements
We thank the anonymous reviewers and area chairs for their invaluable feedback. This work was supported by TUBA GEBIP fellowship awarded to E. Erdem; and by the MMVC project via an Institutional Links grant (Project No. 217E054) under the Newton-Katip Çelebi Fund partnership funded by the Scientific and Technological Research Council of Turkey (TUBITAK) and the British Council. We also thank NVIDIA Corporation for the donation of GPUs used in this research.
context is a procedural text, the question and the multiple choice answers are composed of images
a883bb41449794e0a63b716d9766faea034eb359 | a883bb41449794e0a63b716d9766faea034eb359_1 | Q: What multimodality is available in the dataset?
images and text
5d83b073635f5fd8cd1bdb1895d3f13406583fbd | 5d83b073635f5fd8cd1bdb1895d3f13406583fbd_0 | Q: What are previously reported models?
Text: Introduction
A great deal of commonsense knowledge about the world we live in is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) are very hard for machines as they demand modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste. From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different.
Over the past few years, many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0, BIBREF1, BIBREF3, BIBREF4. To mitigate the aforementioned challenges, the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step. Although these models can accurately learn to make local predictions, they may lack global consistency BIBREF3, BIBREF4, not to mention that building such annotated corpora is very labor-intensive. In this work, we take a different direction and explore the problem from a multimodal standpoint. Our basic motivation, as illustrated in Fig. FIGREF2, is that accompanying images provide complementary cues about causal effects and state changes. For instance, it is quite easy to distinguish raw meat from a cooked one in the visual domain.
In particular, we take advantage of the recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired by BIBREF5, we propose Procedural Reasoning Networks (PRN), which incorporates entities into the comprehension process and allows the model to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture the semantics of the state information in the corresponding steps.
Visual Reasoning in RecipeQA
In our study, we particularly focus on the visual reasoning tasks of RecipeQA, namely visual cloze, visual coherence, and visual ordering tasks, each of which examines a different reasoning skill. We briefly describe these tasks below.
Visual Cloze. In the visual cloze task, the question is formed by a sequence of four images from consecutive steps of a recipe where one of them is replaced by a placeholder. A model should select the correct one from a multiple-choice list of four answer candidates to fill in the missing piece. In that regard, the task inherently requires aligning visual and textual information and understanding temporal relationships between the cooking actions and the entities.
Visual Coherence. The visual coherence task tests the ability to identify the image within a sequence of four images that is inconsistent with the text instructions of a cooking recipe. To succeed in this task, a model should have a clear understanding of the procedure described in the recipe and at the same time connect language and vision.
Visual Ordering. The visual ordering task is about grasping the temporal flow of visual events with the help of the given recipe text. The questions show a set of four images from the recipe and the task is to sort jumbled images into the correct order. Here, a model needs to infer the temporal relations between the images and align them with the recipe steps.
Procedural Reasoning Networks
In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images.
Input Module extracts vector representations of inputs at different levels of granularity by using several different encoders.
Reasoning Module scans the procedural text and tracks the states of the entities and their relations through a recurrent relational memory core unit BIBREF5.
Attention Module computes context-aware query vectors and query-aware context vectors as well as query-aware memory vectors.
Modeling Module employs two multi-layered RNNs to encode the previous layers' outputs.
Output Module scores a candidate answer from the given multiple-choice list.
At a high level, as the model is reading the cooking recipe, it continually updates the internal memory representations of the entities (ingredients) based on the content of each step – it keeps track of changes in the states of the entities, providing an entity-centric summary of the recipe. The response to a question and a possible answer depends on the representation of the recipe text as well as the last states of the entities. All this happens in a series of implicit relational reasoning steps and there is no need for explicitly encoding the state in terms of a predefined vocabulary.
Procedural Reasoning Networks ::: Input Module
Let the triple $(\mathbf {R},\mathbf {Q},\mathbf {A})$ be a sample input. Here, $\mathbf {R}$ denotes the input recipe which contains textual instructions composed of $N$ words in total. $\mathbf {Q}$ represents the question that consists of a sequence of $M$ images. $\mathbf {A}$ denotes an answer that is either a single image or a series of $L$ images depending on the reasoning task. In particular, for the visual cloze and the visual coherence type questions, the answer contains a single image ($L=1$) and for the visual ordering task, it includes a sequence.
We encode the input recipe $\mathbf {R}$ at character, word, and step levels. The character-level embedding layer uses a convolutional neural network, namely the CharCNN model by BIBREF7, which outputs character-level embeddings for each word and alleviates the issue of out-of-vocabulary (OOV) words. In the word embedding layer, we use a pretrained GloVe model BIBREF8 and extract word-level embeddings. The concatenation of the character and the word embeddings is then fed to a two-layer highway network BIBREF10 to obtain a contextual embedding for each word in the recipe. This results in the matrix $\mathbf {R}^{\prime } \in \mathbb {R}^{2d \times N}$.
On top of these layers, we have another layer that encodes the steps of the recipe in an individual manner. Specifically, we obtain a step-level contextual embedding of the input recipe containing $T$ steps as $\mathcal {S}=(\mathbf {s}_1,\mathbf {s}_2,\dots ,\mathbf {s}_T)$ where $\mathbf {s}_i$ represents the final state of a BiLSTM encoding the $i$-th step of the recipe, obtained from the character and word-level embeddings of the tokens that exist in the corresponding step.
We represent both the question $\mathbf {Q}$ and the answer $\mathbf {A}$ in terms of visual embeddings. Here, we employ a pretrained ResNet-50 model BIBREF11 trained on ImageNet dataset BIBREF12 and represent each image as a real-valued 2048-d vector using features from the penultimate average-pool layer. Then these embeddings are passed first to a multilayer perceptron (MLP) and then its outputs are fed to a BiLSTM. We then form a matrix $\mathbf {Q}^{\prime } \in \mathbb {R}^{2d \times M}$ for the question by concatenating the cell states of the BiLSTM. For the visual ordering task, to represent the sequence of images in the answer with a single vector, we additionally use a BiLSTM and define the answering embedding by the summation of the cell states of the BiLSTM. Finally, for all tasks, these computations produce answer embeddings denoted by $\mathbf {a} \in \mathbb {R}^{2d \times 1}$.
Procedural Reasoning Networks ::: Reasoning Module
As mentioned before, comprehending a cooking recipe is mostly about entities (basic ingredients) and actions (cooking activities) described in the recipe instructions. Each action leads to changes in the states of the entities, which usually affects their visual characteristics. A change rarely occurs in isolation; in most cases, the action affects multiple entities at once. Hence, in our reasoning module, we have an explicit memory component implemented with relational memory units BIBREF5. This helps us to keep track of the entities, their state changes and their relations to each other over the course of the recipe (see Fig. FIGREF14). As we will examine in more detail in Section SECREF4, it also greatly improves the interpretability of model outputs.
Specifically, we set up the memory with a memory matrix $\mathbf {E} \in \mathbb {R}^{d_E \times K}$ by extracting $K$ entities (ingredients) from the first step of the recipe. We initialize each memory cell $\mathbf {e}_i$ representing a specific entity by its CharCNN and pre-trained GloVe embeddings. From now on, we will use the terms memory cells and entities interchangeably throughout the paper. Since the input recipe is given in the form of a procedural text decomposed into a number of steps, we update the memory cells after each step, reflecting the state changes that happened to the entities. This update procedure is modelled via a relational recurrent neural network (R-RNN), recently proposed by BIBREF5. It is built on a 2-dimensional LSTM model whose matrix of cell states represents our memory matrix $\mathbf {E}$. Here, each row $i$ of the matrix $\mathbf {E}$ refers to a specific entity $\mathbf {e}_i$ and is updated after each recipe step $t$ as follows: $\mathbf {\phi }_{i,t}=\text{R-RNN}(\mathbf {s}_{t}, \mathbf {\phi }_{i,t-1}),$
where $\mathbf {s}_{t}$ denotes the embedding of recipe step $t$ and $\mathbf {\phi }_{i,t}=(\mathbf {h}_{i,t},\mathbf {e}_{i,t})$ is the cell state of the R-RNN at step $t$ with $\mathbf {h}_{i,t}$ and $\mathbf {e}_{i,t}$ being the $i$-th row of the hidden state of the R-RNN and the dynamic representation of entity $\mathbf {e}_{i}$ at the step $t$, respectively. The R-RNN model exploits a multi-headed self-attention mechanism BIBREF13 that allows memory cells to interact with each other and attend multiple locations simultaneously during the update phase.
In Fig. FIGREF14, we illustrate how this interaction takes place in our relational memory module by considering a sample cooking recipe and by presenting how the attention matrix changes throughout the recipe. In particular, the attention matrix at a specific time shows the attention flow from one entity (memory cell) to another along with the attention weights to the corresponding recipe step (offset column). The color intensity shows the magnitude of the attention weights. As can be seen from the figure, the internal representations of the entities are actively updated at each step. Moreover, as argued in BIBREF5, this can be interpreted as a form of relational reasoning as each update on a specific memory cell is operated in relation to others. Here, we should note that it is often difficult to make sense of these attention weights. However, we observe that the attention matrix changes very gradually near the completion of the recipe.
Procedural Reasoning Networks ::: Attention Module
Attention module is in charge of linking the question with the recipe text and the entities present in the recipe. It takes the matrices $\mathbf {Q^{\prime }}$ and $\mathbf {R}^{\prime }$ from the input module, and $\mathbf {E}$ from the reasoning module and constructs the question-aware recipe representation $\mathbf {G}$ and the question-aware entity representation $\mathbf {Y}$. Following the attention flow mechanism described in BIBREF14, we specifically calculate attentions in four different directions: (1) from question to recipe, (2) from recipe to question, (3) from question to entities, and (4) from entities to question.
The first two of these attentions require computing a shared affinity matrix $\mathbf {S}^R \in \mathbb {R}^{N \times M}$, with $\mathbf {S}^R_{i,j}$ indicating the similarity between the $i$-th recipe word and the $j$-th image in the question, estimated by $\mathbf {S}^R_{i,j}=\mathbf {w}^{\top }_{R}\left[\mathbf {R}^{\prime }_i;\mathbf {Q}^{\prime }_j;\mathbf {R}^{\prime }_i \circ \mathbf {Q}^{\prime }_j\right],$
where $\mathbf {w}^{\top }_{R}$ is a trainable weight vector, $\circ $ and $[;]$ denote elementwise multiplication and concatenation operations, respectively.
Recipe-to-question attention determines the images within the question that is most relevant to each word of the recipe. Let $\mathbf {\tilde{Q}} \in \mathbb {R}^{2d \times N}$ represent the recipe-to-question attention matrix with its $i$-th column being given by $ \mathbf {\tilde{Q}}_i=\sum _j \mathbf {a}_{ij}\mathbf {Q}^{\prime }_j$ where the attention weight is computed by $\mathbf {a}_i=\operatorname{softmax}(\mathbf {S}^R_{i}) \in \mathbb {R}^M$.
Question-to-recipe attention signifies the words within the recipe that have the closest similarity to each image in the question, and constructs an attended recipe vector given by $ \tilde{\mathbf {r}}=\sum _{i}\mathbf {b}_i\mathbf {R}^{\prime }_i$ with the attention weight calculated by $\mathbf {b}=\operatorname{softmax}(\operatorname{max}_{\mathit {col}}(\mathbf {S}^R)) \in \mathbb {R}^{N}$ where $\operatorname{max}_{\mathit {col}}$ denotes the maximum function across the column. The question-to-recipe matrix is then obtained by replicating $\tilde{\mathbf {r}}$ $N$ times across the column, giving $\tilde{\mathbf {R}} \in \mathbb {R}^{2d \times N}$.
Then, we construct the question-aware representation of the input recipe, $\mathbf {G}$, with its $i$-th column $\mathbf {G}_i \in \mathbb {R}^{8d}$ denoting the final embedding of the $i$-th word, given by
Attentions from question to entities, and from entities to question, are computed in a way similar to the ones described above. The only difference is that a different shared affinity matrix is computed between the memory encoding the entities, $\mathbf {E}$, and the question $\mathbf {Q}^{\prime }$. These attentions are then used to construct the question-aware representation of entities, denoted by $\mathbf {Y}$, which links and integrates the images in the question and the entities in the input recipe.
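For illustration, the two directional attentions over the recipe can be sketched as below. The affinity matrix $\mathbf {S}^R$ is assumed to be given, the final fusion into $\mathbf {G}$ follows one common BiDAF-style choice (the paper's exact formula is not shown above), and all variable names are ours.

```python
import torch
import torch.nn.functional as F

d, N, M = 64, 12, 4                        # hidden size, #recipe words, #question images
R = torch.randn(2 * d, N)                  # contextual recipe embeddings R'
Q = torch.randn(2 * d, M)                  # contextual question embeddings Q'
S = torch.randn(N, M)                      # affinity matrix S^R (assumed given)

# Recipe-to-question attention: each recipe word attends over the question images.
a = F.softmax(S, dim=1)                    # (N, M), row-wise softmax
Q_tilde = Q @ a.T                          # (2d, N), attended question per recipe word

# Question-to-recipe attention: attend over recipe words most similar to any image.
b = F.softmax(S.max(dim=1).values, dim=0)  # (N,), max over columns then softmax
r_tilde = R @ b                            # (2d,), single attended recipe vector
R_tilde = r_tilde.unsqueeze(1).expand(-1, N)   # tiled N times -> (2d, N)

# Question-aware recipe representation G (a common BiDAF-style fusion, assumed here).
G = torch.cat([R, Q_tilde, R * Q_tilde, R * R_tilde], dim=0)   # (8d, N)
```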
Procedural Reasoning Networks ::: Modeling Module
The modeling module takes the question-aware representations of the recipe $\mathbf {G}$ and the entities $\mathbf {Y}$, and forms their combined vector representation. For this purpose, we first use a two-layer BiLSTM to read the question-aware recipe $\mathbf {G}$ and to encode the interactions among the words conditioned on the question. For each direction of the BiLSTM, we use its hidden state after reading the last token as its output. In the end, we obtain a vector embedding $\mathbf {c} \in \mathbb {R}^{2d \times 1}$. Similarly, we employ a second BiLSTM, this time over the entities $\mathbf {Y}$, which results in another vector embedding $\mathbf {f} \in \mathbb {R}^{2d_E \times 1}$. Finally, these vector representations are concatenated and then projected to a fixed size representation using $\mathbf {o}=\varphi _o(\left[\mathbf {c}; \mathbf {f}\right]) \in \mathbb {R}^{2d \times 1}$ where $\varphi _o$ is a multilayer perceptron with $\operatorname{tanh}$ activation function.
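A rough PyTorch stand-in for this module is shown below; the layer sizes, the single-linear-layer MLP, and the way final hidden states are gathered are assumptions on our part, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ModelingSketch(nn.Module):
    """Simplified stand-in for the modeling module (our own assumptions)."""
    def __init__(self, d=64, d_e=64):
        super().__init__()
        self.recipe_lstm = nn.LSTM(8 * d, d, num_layers=2,
                                   bidirectional=True, batch_first=True)
        self.entity_lstm = nn.LSTM(d_e, d_e, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d + 2 * d_e, 2 * d), nn.Tanh())

    def forward(self, G, Y):
        # G: (batch, N, 8d) question-aware recipe; Y: (batch, K, d_e) question-aware entities
        _, (h_g, _) = self.recipe_lstm(G)
        c = torch.cat([h_g[-2], h_g[-1]], dim=-1)   # both directions -> (batch, 2d)
        _, (h_y, _) = self.entity_lstm(Y)
        f = torch.cat([h_y[-2], h_y[-1]], dim=-1)   # (batch, 2d_e)
        return self.mlp(torch.cat([c, f], dim=-1))  # fixed-size representation o

o = ModelingSketch()(torch.randn(2, 12, 8 * 64), torch.randn(2, 5, 64))
```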
Procedural Reasoning Networks ::: Output Module
The output module takes the output of the modeling module, encoding vector embeddings of the question-aware recipe and the entities $\mathbf {Y}$, together with the embedding of the answer $\mathbf {A}$, and returns a similarity score which is used while determining the correct answer. Among all the candidate answers, the one having the highest similarity score is chosen as the correct answer. To train our proposed procedural reasoning network, we employ a hinge ranking loss BIBREF15, similar to the one used in BIBREF2, given below.
where $\gamma $ is the margin parameter, $\mathbf {a}_+$ and $\mathbf {a}_{-}$ are the correct and the incorrect answers, respectively.
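A minimal sketch of such a hinge ranking objective over $\mathbf {a}_+$ and $\mathbf {a}_-$ is given below; the choice of cosine similarity and the mean reduction over a batch are assumptions, since only the general form of the loss is specified above.

```python
import torch
import torch.nn.functional as F

def hinge_ranking_loss(o, a_pos, a_neg, gamma=0.5):
    """Hinge ranking loss over similarity scores (sketch; cosine similarity assumed)."""
    s_pos = F.cosine_similarity(o, a_pos, dim=-1)   # similarity to the correct answer
    s_neg = F.cosine_similarity(o, a_neg, dim=-1)   # similarity to an incorrect answer
    return torch.clamp(gamma - s_pos + s_neg, min=0).mean()

# Usage with random embeddings standing in for o, a_+ and a_-.
o, a_pos, a_neg = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
loss = hinge_ranking_loss(o, a_pos, a_neg)
```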
Experiments
In this section, we describe our experimental setup and then analyze the results of the proposed Procedural Reasoning Networks (PRN) model.
Experiments ::: Entity Extraction
Given a recipe, we automatically extract the entities from the initial step of a recipe by using a dictionary of ingredients. While determining the ingredients, we exploit Recipe1M BIBREF16 and Kaggle What’s Cooking Recipes BIBREF17 datasets, and form our dictionary using the most commonly used ingredients in the training set of RecipeQA. For the cases when no entity can be extracted from the recipe automatically (20 recipes in total), we manually annotate those recipes with the related entities.
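A simple dictionary-matching sketch of this extraction step is shown below; the toy ingredient list and the longest-match-first rule are illustrative assumptions rather than the exact procedure used with the Recipe1M and Kaggle dictionaries.

```python
def extract_entities(first_step_text, ingredient_dict):
    """Return ingredients from the dictionary that appear in the first recipe step."""
    text = first_step_text.lower()
    found = []
    # Prefer longer ingredient names first so "ground beef" wins over "beef".
    for ingredient in sorted(ingredient_dict, key=len, reverse=True):
        if ingredient in text and not any(ingredient in f for f in found):
            found.append(ingredient)
    return found

ingredients = {"ground beef", "beef", "salt", "pepper", "cheese", "buns"}
step1 = "Place the ground beef in a bowl, season with salt and pepper."
print(extract_entities(step1, ingredients))   # ['ground beef', 'pepper', 'salt']
```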
Experiments ::: Training Details
In our experiments, we trained separate models on each task, and we also investigated multi-task learning, where a single model is trained to solve all these tasks at once. In total, the PRN architecture consists of $\sim $12M trainable parameters. We implemented our models in PyTorch BIBREF18 using the AllenNLP library BIBREF6. We used the Adam optimizer with a learning rate of 1e-4 and an early stopping criterion with the patience set to 10, meaning that training ends if the performance does not improve for 10 iterations. We considered a batch size of 32 due to our hardware constraints. In the multi-task setting, batches are sampled round-robin from all tasks, where each batch is solely composed of examples from one task. We performed our experiments on a system containing four NVIDIA GTX-1080Ti GPUs, and training a single model took around 2 hours. We employed the same hyperparameters for all the baseline systems. We plan to share our code and model implementation after the review process.
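The round-robin batch sampling used in the multi-task setting can be sketched as follows; the task names, the toy "batches", and the stop-when-exhausted policy are placeholders we chose for illustration.

```python
from itertools import cycle, islice

def round_robin_batches(task_loaders):
    """Yield batches by cycling over tasks; each batch comes from a single task."""
    iterators = {task: iter(loader) for task, loader in task_loaders.items()}
    for task in cycle(task_loaders):
        try:
            yield task, next(iterators[task])
        except StopIteration:
            return  # stop when one task runs out (one possible policy)

# Toy loaders: lists of "batches" standing in for real DataLoaders.
loaders = {
    "visual_cloze": [f"cloze_batch_{i}" for i in range(3)],
    "visual_coherence": [f"coherence_batch_{i}" for i in range(3)],
    "visual_ordering": [f"ordering_batch_{i}" for i in range(3)],
}
for task, batch in islice(round_robin_batches(loaders), 6):
    print(task, batch)
```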
Experiments ::: Baselines
We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.
Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.
Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.
BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.
BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses static entity embeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates.
Experiments ::: Results
Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In the single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In the multi-task training setting, where a single model is trained to solve all the tasks at once, PRN and BiDAF w/ static memory perform comparably and give much better results than BiDAF. Note that the model performances in the multi-task training setting are worse than the single-task performances. We believe that this is because some of the tasks are inherently more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.
In Fig. FIGREF28, we illustrate the entity embedding space by projecting the embeddings learned from the step-by-step memory snapshots over time from the 200-d vector space to 3-d space with t-SNE. Color codes denote the categories of the cooking recipes. As can be seen, these step-aware embeddings show clear clustering of these categories. Moreover, within each cluster, the entities are grouped together in terms of their state characteristics. For instance, in the zoomed parts of the figure, chopped and sliced, or stirred and whisked entities are placed close to each other.
Fig. FIGREF30 demonstrates entity arithmetics using the embeddings learned at each entity step. Here, we show that the embeddings learned from the memory snapshots can effectively capture the contextual information about the entities at each time point in the corresponding step while taking the recipe data into account. This basic arithmetic operation suggests that the proposed model can successfully capture the semantics of each entity's state in the corresponding step.
Related Work
In recent years, tracking entities and their state changes has been explored in the literature from a variety of perspectives. In an early work, BIBREF21 proposed a dynamic memory based network which updates entity states using a gating mechanism while reading the text. BIBREF22 presented a more structured memory augmented model which employs memory slots for representing both entities and their relations. BIBREF23 suggested a conceptually similar model in which the pairwise relations between attended memories are utilized to encode the world state. The main difference between our approach and these works is that by utilizing relational memory core units we also allow memories to interact with each other during each update.
BIBREF24 showed that similar ideas can be used to compile supporting memories in tracking dialogue state. BIBREF25 has shown the importance of coreference signals for the reading comprehension task. More recently, BIBREF26 introduced a specialized recurrent layer which uses coreference annotations for improving reading comprehension tasks. On the language modeling task, BIBREF27 proposed a language model which can explicitly incorporate entities while dynamically updating their representations for a variety of tasks such as language modeling, coreference resolution, and entity prediction.
Our work builds upon and contributes to the growing literature on tracking state changes in procedural text. BIBREF0 presented a neural model that can learn to explicitly predict state changes of ingredients at different points in a cooking recipe. BIBREF1 proposed another entity-aware model to track entity states in scientific processes. BIBREF3 demonstrated that the prediction quality can be boosted by including hard and soft constraints to eliminate unlikely or favor probable state changes. In a follow-up work, BIBREF4 exploited the notion of label consistency in training to enforce similar predictions in similar procedural contexts. BIBREF28 proposed a model that dynamically constructs a knowledge graph while reading the procedural text to track the ever-changing states of entities. As discussed in the introduction, however, these previous methods use a strong inductive bias and assume that state labels are present during training. In our study, we deliberately focus on unlabeled procedural data and ask the question: can multimodality help to identify and provide insights into understanding state changes?
Conclusion
We have presented a new neural architecture called Procedural Reasoning Networks (PRN) for multimodal understanding of step-by-step instructions. Our proposed model is based on the successful BiDAF framework but also equipped with an explicit memory unit that provides an implicit mechanism to keep track of the changes in the states of the entities over the course of the procedure. Our experimental analysis on visual reasoning tasks in the RecipeQA dataset shows that the model significantly improves the results of the previous models, indicating that it better understands the procedural text and the accompanying images. Additionally, we carefully analyze our results and find that our approach learns meaningful dynamic representations of entities without any entity-level supervision. Although we achieve state-of-the-art results on RecipeQA, clearly there is still room for improvement compared to human performance. We also believe that the PRN architecture will be of value to other visual and textual sequential reasoning tasks.
Acknowledgements
We thank the anonymous reviewers and area chairs for their invaluable feedback. This work was supported by TUBA GEBIP fellowship awarded to E. Erdem; and by the MMVC project via an Institutional Links grant (Project No. 217E054) under the Newton-Katip Çelebi Fund partnership funded by the Scientific and Technological Research Council of Turkey (TUBITAK) and the British Council. We also thank NVIDIA Corporation for the donation of GPUs used in this research. | Hasty Student, Impatient Reader, BiDAF, BiDAF w/ static memory |
171ebfdc9b3a98e4cdee8f8715003285caeb2f39 | 171ebfdc9b3a98e4cdee8f8715003285caeb2f39_0 | Q: How much better is the accuracy of the new model compared to previously reported models?
Average accuracy of the proposed model vs. the best previous result:
Single-task Training: 57.57 vs 55.06
Multi-task Training: 50.17 vs 50.59 |
3c3cb51093b5fd163e87a773a857496a4ae71f03 | 3c3cb51093b5fd163e87a773a857496a4ae71f03_0 | Q: How does the scoring model work?
Text: Introduction
Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0. By analyzing EHRs, much useful information closely related to patients can be discovered BIBREF1. Since Chinese EHRs are recorded without explicit word delimiters (e.g., “糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs contain many medical terminologies, such as “高血压性心脏病” (hypertensive heart disease) and “罗氏芬” (Rocephin), so only annotators with medical backgrounds are qualified to label EHRs. Second, EHRs may involve patients' personal privacy. Therefore, they cannot be openly published on a large scale for labeling. These two problems lead to high annotation costs and an insufficient training corpus for research on CWS in medical text.
CWS is usually formulated as a sequence labeling task BIBREF2, which can be solved by supervised learning approaches, such as the hidden Markov model (HMM) BIBREF3 and the conditional random field (CRF) BIBREF4. However, these methods rely heavily on handcrafted features. To relieve the effort of feature engineering, neural network-based methods are beginning to thrive BIBREF5, BIBREF6, BIBREF7. However, due to insufficient annotated training data, conventional CWS models trained on open corpora often suffer from significant performance degradation when transferred to domain-specific text. Moreover, the task has rarely been tackled in the medical domain, and only one related work on transfer learning is found in the recent literature BIBREF8. In addition, research on transfer learning mostly remains in general domains, which causes a major problem: a considerable amount of manually annotated data is still required when introducing the models into specific domains.
One of the solutions to this obstacle is to use active learning, where only a small number of samples are selected and labeled in an active manner. Active learning methods are favored by researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10. However, only a handful of works have been conducted on CWS BIBREF2, and few focus on medical-domain tasks.
Given the aforementioned challenges and current research, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of a word score, a link score and a sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine branch information entropy and a gated neural network to determine whether a segment is a legal word, i.e., the word score. Meanwhile, we use the hidden layer output of a long short-term memory (LSTM) BIBREF11 to find out how the word is linked to its surroundings, i.e., the link score. The final decision on the selection of labeling samples is made by calculating the average of word and link scores over the whole segmented sentence, i.e., the sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of the CRF-based word segmenter.
To sum up, the main contributions of our work are summarized as follows:
The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4. Finally, Section SECREF5 concludes the paper and envisions future work.
Chinese Word Segmentation
In past decades, research on CWS, an important task for Chinese NLP BIBREF7, has a long history, and various methods have been proposed BIBREF13, BIBREF14, BIBREF15. These methods mainly fall into two categories: supervised learning and deep learning BIBREF2.
Supervised Learning Methods. Initially, supervised learning methods were widely used in CWS. Xue BIBREF13 employed a maximum entropy tagger to automatically tag Chinese characters. Zhao et al. BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection. However, these methods greatly rely on manual feature engineering BIBREF17, while handcrafted features are difficult to design and their number is usually very large BIBREF6.
Deep Learning Methods. Recently, neural networks have been applied to CWS tasks. To name a few, Zheng et al. BIBREF14 used deep layers of neural networks to learn feature representations of characters. Chen et al. BIBREF6 adopted LSTM to capture the previous important information. Chen et al. BIBREF18 proposed a gated recursive neural network (GRNN), which contains reset and update gates to incorporate the complicated combinations of characters. Jiang and Tang BIBREF19 proposed a sequence-to-sequence transformer model to avoid overfitting and capture character information at distant positions in a sentence. Yang et al. BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM (LaLSTM) network. However, general word segmentation models do not work well in specific fields due to the lack of annotated training data.
Currently, a handful of domain-specific CWS approaches have been studied, but they focus on scattered domains. In the metallurgical field, Shao et al. BIBREF15 proposed a domain-specific CWS method based on a Bi-LSTM model. In the medical field, Xing et al. BIBREF8 proposed an adaptive multi-task transfer learning framework to fully leverage domain-invariant knowledge from a high-resource domain to the medical domain. Meanwhile, transfer learning still focuses largely on corpora in the general domain; when it comes to a specific domain, large amounts of manually-annotated data are necessary. Active learning can solve this problem to a certain extent. However, due to the challenges of performing active learning on CWS, only a few studies have been conducted. For judgment documents, Yan et al. BIBREF21 adopted a local annotation strategy, which selects substrings around the informative characters in active learning. However, their method still stays at the statistical level. Unlike the above method, we propose an active learning approach for CWS in medical text, which combines information entropy with a neural network to effectively reduce the annotation cost.
Active Learning
Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 . The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24 , query-by-committee BIBREF25 and information gain BIBREF26 . Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 .
The formal definition of uncertainty sampling is to select a sample $x^*$ that maximizes the entropy $H(y \mid x)$ over the probability of the predicted classes:

$$x^* = \operatorname{arg\,max}_{x} \; - \sum _{y} P(y \mid x) \log P(y \mid x)$$

where $x$ is a multi-dimensional feature vector, $y$ is its binary label, and $P(y \mid x)$ is the predicted probability, through which a classifier trained on the training set can map features to labels. However, in some complicated tasks, such as CWS and NER, only considering the uncertainty of the classifier is obviously not enough.
Active Learning for Chinese Word Segmentation
Active learning methods can generally be described in two parts: a learning engine and a selection engine BIBREF28. The learning engine is essentially a classifier, which is mainly used for training on the classification problem. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from the unlabeled data. Then, the relabeled samples are added to the training set for the classifier to re-train on, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as the learning engine and the selection engine, respectively.
Fig. FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning. First, we train a CRF-based segmenter on the train set. Then, the segmenter is employed to annotate the unlabeled set roughly. Subsequently, the information entropy based scoring model picks the n lowest-ranking samples for annotators to relabel. Meanwhile, the train set and the unlabeled set are updated. Finally, we re-train the segmenter. The above steps iterate until the desired accuracy is achieved or the number of iterations has reached a predefined threshold.

Algorithm: Active Learning for Chinese Word Segmentation
Input: labeled data, unlabeled data, the number of iterations, the number of samples selected per iteration n, partitioning function, size
Output: a word segmentation model with the smallest test set loss
1. Initialize the labeled and unlabeled sets.
2. Train a word segmenter on the labeled data and estimate the test set loss.
3. Label the unlabeled data with the segmenter.
4. For each iteration:
   (a) compute the score of each sample with the branch information entropy based scoring model;
   (b) select the n lowest-ranking samples;
   (c) relabel the selected samples by annotators;
   (d) form a new labeled dataset and a new unlabeled dataset;
   (e) re-train the word segmenter, estimate the new test set loss and compute the loss reduction.
5. Return the word segmenter with the smallest test set loss.
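In code, the overall loop could look like the following sketch, where `train_crf`, `evaluate`, `score_sentences`, and `relabel` are hypothetical helpers standing in for the CRF segmenter, test-set evaluation, the scoring model, and human annotation.

```python
def active_learning_cws(labeled, unlabeled, test, n_iters, n_select,
                        train_crf, evaluate, score_sentences, relabel):
    """Sketch of the active learning loop; all helper functions are hypothetical."""
    best_model, best_loss = None, float("inf")
    for _ in range(n_iters):
        model = train_crf(labeled)                     # learning engine
        loss = evaluate(model, test)
        if loss < best_loss:
            best_model, best_loss = model, loss
        scores = score_sentences(model, unlabeled)     # selection engine
        # Pick the n lowest-scoring (least confident) sentences for relabeling.
        ranked = sorted(zip(scores, unlabeled), key=lambda x: x[0])
        selected = [sent for _, sent in ranked[:n_select]]
        labeled = labeled + relabel(selected)          # human annotation
        unlabeled = [s for s in unlabeled if s not in selected]
    return best_model
```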
CRF-based Word Segmenter
CWS can be formalized as a sequence labeling problem with character position tags, which are (`B', `M', `E', `S'). So, we convert the labeled data into the `BMES' format, in which each character in the sequence is assigned one of the following labels: B = beginning of a word, M = middle of a word, E = end of a word and S = single-character word.
In this paper, we use CRF as the training model for the CWS task. Given the observed sequence, CRF has a single exponential model for the joint probability of the entire sequence of labels, while the maximum entropy Markov model (MEMM) BIBREF29 uses per-state exponential models for the conditional probabilities of next states BIBREF4. Therefore, it can solve the label bias problem effectively. Compared with neural networks, it also depends less on the corpus size.
First, we pre-process EHRs at the character level, separating each character of the raw EHRs. For instance, given a sentence $S = c_1 c_2 \dots c_n$, where $c_i$ represents the $i$-th character, the separated form is $[c_1, c_2, \dots , c_n]$. Then, we employ Word2Vec BIBREF30 to train on the pre-processed EHRs to get character embeddings. To capture interactions between adjacent characters, the K-means clustering algorithm BIBREF31 is utilized to model the coherence over characters. In general, K-means divides the EHR characters into $k$ clusters so that the similarity of EHR characters in the same cluster is higher. In each iteration, K-means assigns every EHR character to the nearest cluster based on the distance to the mean vector, then recalculates and adjusts the mean vectors of these clusters until they converge. K-means features explicitly show the difference between two adjacent characters and even multiple characters. Finally, we additionally add the K-means clustering features to the input of the CRF-based segmenter. The segmenter makes positional tagging decisions over individual characters. For example, the Chinese segmented sentence “病人/长期/于/我院/肾病科/住院/治疗/。/” (The patient was hospitalized for a long time in the nephrology department of our hospital.) is labeled as `BEBESBEBMEBEBES'.
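For illustration, the BMES conversion and the clustering feature can be sketched as below; the random vectors stand in for real Word2Vec embeddings, and the tiny cluster count is only for the toy example (the setting reported later in the paper is 400 clusters over 150-d embeddings).

```python
import numpy as np
from sklearn.cluster import KMeans

def to_bmes(segmented_words):
    """Convert a list of segmented words into per-character BMES tags."""
    tags = []
    for word in segmented_words:
        if len(word) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
    return tags

print("".join(to_bmes(["病人", "长期", "于", "我院", "肾病科", "住院", "治疗", "。"])))
# -> BEBESBEBMEBEBES

# K-means cluster ids used as an extra CRF feature (random embeddings as stand-ins).
chars = sorted({c for w in ["病人", "长期", "于", "我院"] for c in w})
char_vecs = np.random.rand(len(chars), 150)        # would come from Word2Vec
cluster_id = dict(zip(chars, KMeans(n_clusters=2, n_init=10).fit_predict(char_vecs)))
```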
Information Entropy Based Scoring Model
To select the most appropriate sentences from a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural networks as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32. The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input and scores each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link by which the candidate word directly follows the previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model.
We use a gated neural network and information entropy to capture the likelihood of a segment being a legal word. The architecture of the word score model is depicted in Fig. FIGREF12.
Gated Combination Neural Network (GCNN)
To effectively learn word representations through character embeddings, we use a GCNN BIBREF32. The architecture of the GCNN is demonstrated in Fig. FIGREF13, which includes update gates and reset gates. The gated mechanism not only captures the characteristics of the characters themselves, but also utilizes the interaction between the characters. These two types of gate vectors determine the final output of the network, where the update gates help the model determine what should be passed on, and the reset gates primarily help the model decide what should be cleared. In particular, the word embedding of a word with $n$ characters can be computed as:

$$\mathbf {w} = \mathbf {z}_{\hat{w}} \odot \hat{\mathbf {w}} + \sum _{i=1}^{n} \mathbf {z}_i \odot \mathbf {c}_i$$

where $\mathbf {z}_{\hat{w}}$ and $\mathbf {z}_i$ are update gates for the new combination vector $\hat{\mathbf {w}}$ and the $i$-th character $\mathbf {c}_i$, respectively. The combination vector $\hat{\mathbf {w}}$ is formalized as:

$$\hat{\mathbf {w}} = \tanh \left( \mathbf {W} \left[ \mathbf {r}_1 \odot \mathbf {c}_1 ; \dots ; \mathbf {r}_n \odot \mathbf {c}_n \right] \right)$$

where $\mathbf {r}_1, \dots , \mathbf {r}_n$ are reset gates for the characters.
Left and Right Branch Information Entropy. In general, each string in a sentence may be a word. However, compared with a string which is not a word, the string of a word is significantly more independent. The branch information entropy is usually used to judge whether the characters in a string are tightly linked, based on the statistical characteristics of the string, which reflects the likelihood of a string being a word. The left and right branch information entropy can be formalized as follows:

$$E_L(w_i) = -\sum _{c \in \mathcal {C}} P(c\,w_i \mid w_i) \log P(c\,w_i \mid w_i)$$

$$E_R(w_i) = -\sum _{c \in \mathcal {C}} P(w_i\,c \mid w_i) \log P(w_i\,c \mid w_i)$$
where $w_i$ denotes the $i$-th candidate word, $\mathcal {C}$ denotes the character set, $P(c\,w_i \mid w_i)$ denotes the probability that character $c$ is on the left of word $w_i$, and $P(w_i\,c \mid w_i)$ denotes the probability that character $c$ is on the right of word $w_i$. $E_L(w_i)$ and $E_R(w_i)$ respectively represent the left and right branch information entropy of the candidate word $w_i$. If the left and right branch information entropy of a candidate word are both relatively high, the probability that the candidate word combines with its surrounding characters to form a longer word is low; thus, the candidate word is likely to be a legal word.
To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right branch entropy of each candidate word and then take their average as the measurement standard:

$$E(w_i) = \frac{1}{2} \left( E_L(w_i) + E_R(w_i) \right)$$
We represent a segmented sentence with $m$ candidate words as $[w_1, w_2, \dots , w_m]$, so the word score of the $i$-th candidate word is computed by its average entropy:

$$\operatorname{WordScore}(w_i) = E(w_i)$$
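The branch entropy computation can be sketched directly from corpus counts, as below; the toy neighbor counts are made up for illustration.

```python
import math
from collections import Counter

def branch_entropy(neighbor_counts):
    """Entropy over the distribution of neighboring characters of a candidate word."""
    total = sum(neighbor_counts.values())
    return -sum((c / total) * math.log(c / total) for c in neighbor_counts.values())

def word_score(left_neighbors, right_neighbors):
    """Average of the left and right branch entropy, used as the word score."""
    return 0.5 * (branch_entropy(left_neighbors) + branch_entropy(right_neighbors))

# Toy counts: characters observed immediately left/right of the candidate word "住院".
left = Counter({"科": 3, "续": 2, "次": 1, "后": 4})
right = Counter({"治": 5, "期": 2, "部": 1, "。": 4})
print(word_score(left, right))
```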
In this paper, we use LSTM to capture the coherence between words in a segmented sentence. This neural network is mainly an optimization of the traditional RNN. RNN is widely used to deal with time-series prediction problems. The result of its current hidden layer is determined by the input of the current layer and the output of the previous hidden layer BIBREF33. Therefore, RNN can remember historical results. However, the traditional RNN has problems of vanishing and exploding gradients when training on long sequences BIBREF34. By adding a gated mechanism to RNN, LSTM effectively solves these problems, which motivates us to compute the link score with LSTM. Formally, the LSTM unit performs the following operations at time step $t$:

$$\mathbf {i}_t = \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i \mathbf {h}_{t-1} + \mathbf {b}_i), \quad \mathbf {f}_t = \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f \mathbf {h}_{t-1} + \mathbf {b}_f), \quad \mathbf {o}_t = \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o \mathbf {h}_{t-1} + \mathbf {b}_o)$$

$$\mathbf {c}_t = \mathbf {f}_t \odot \mathbf {c}_{t-1} + \mathbf {i}_t \odot \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c \mathbf {h}_{t-1} + \mathbf {b}_c), \quad \mathbf {h}_t = \mathbf {o}_t \odot \tanh (\mathbf {c}_t)$$

where $\mathbf {x}_t$, $\mathbf {h}_{t-1}$ and $\mathbf {c}_{t-1}$ are the inputs of the LSTM, all $\mathbf {W}$ and $\mathbf {U}$ are parameter matrices to be trained, and the $\mathbf {b}$ are bias vectors to be trained. $\odot$ and $\sigma$ respectively represent element-wise multiplication and the sigmoid function. In the LSTM unit, there are two hidden states ($\mathbf {c}_t$, $\mathbf {h}_t$), where $\mathbf {c}_t$ is the internal memory cell for dealing with vanishing gradients, while $\mathbf {h}_t$ is the main output of the LSTM unit used for complex operations in subsequent layers.
We denote $\mathbf {w}_t$ as the word embedding at time step $t$; a prediction $\mathbf {p}_{t+1}$ of the next word embedding $\mathbf {w}_{t+1}$ can be computed from the hidden layer $\mathbf {h}_t$.

Therefore, the link score of the next word embedding $\mathbf {w}_{t+1}$ is computed by comparing $\mathbf {w}_{t+1}$ with the prediction $\mathbf {p}_{t+1}$.

Due to the structure of the LSTM, the vector $\mathbf {h}_t$ contains important information about the entire sequence of segmentation decisions. In this way, the link score reflects sequence-level segmentation decisions, not just word-level ones.
Intuitively, we can compute the score of a segmented sequence by summing up word scores and link scores. However, we find that a sequence with more candidate words tends to have higher sequence scores. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate final scores as follows: DISPLAYFORM0
where INLINEFORM0 denotes the INLINEFORM1 -th segmented sequence with INLINEFORM2 candidate words, and INLINEFORM3 represents the INLINEFORM4 -th candidate words in the segmented sequence.
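Assuming the normalization is a simple average over the number of candidate words (the exact form is given by the equation above), the final score can be sketched in Python as follows; the function and variable names are ours.

```python
def sequence_score(word_scores, link_scores):
    """Length-normalized sequence score: the summed word and link scores of a
    segmented sequence divided by its number of candidate words, so that
    longer segmentations are not favored simply for having more terms."""
    m = len(word_scores)
    total = sum(word_scores) + sum(link_scores)
    return total / m

# Toy example: three candidate words and the two links between them.
print(sequence_score([1.2, 0.8, 1.0], [0.5, 0.7]))
```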
When training the model, we seek to minimize the loss defined over the sequence scores of the correct segmentation and the predicted segmentation: DISPLAYFORM0
where INLINEFORM0 is the loss function.
Datasets
We collect 204 EHRs of patients with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, each containing 27 types of records. We choose 4 different types from them, with a total of 3868 records: first course reports, medical records, chief ward round records and discharge records. The detailed information of the EHRs is listed in Table TABREF32 .
We split our datasets as follows. First, we randomly select 3200 of the 3868 records as the unlabeled set. Then, we manually annotate the remaining 668 records as the labeled set, which contains 1170 sentences. Finally, we randomly divide the labeled set into a train set and a test set with a ratio of 7:3. Statistics of the datasets are listed in Table TABREF33 .
Parameter Settings
To determine suitable parameters, we divide the training set into two parts: the first 80% of the sentences as the training set and the remaining 20% as the validation set.
The character embedding dimension and the number of K-means clusters are the two main parameters of the CRF-based word segmenter.
In this paper, we choose a character-based CRF without any additional features as the baseline. First, we use Word2Vec to train character embeddings with dimensions of [`50', `100', `150', `200', `300', `400'] respectively, thus obtaining six sets of character embeddings with different dimensions. Second, these six types of character embeddings are fed to the K-means algorithm with the number of clusters set to [`50', `100', `200', `300', `400', `500', `600'] respectively to capture the corresponding features of the character embeddings. Then, we add the K-means clustering features to the baseline for training. As can be seen from Fig. FIGREF36 , when the character embedding dimension INLINEFORM0 = 150 and the number of clusters INLINEFORM1 = 400, the CRF-based word segmenter performs best, so these two parameters are used in subsequent experiments.
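The grid search over embedding dimensions and cluster counts can be sketched as follows (assuming gensim 4.x for Word2Vec and scikit-learn for K-means; the toy character sentences and the guard on the cluster count are our own additions, and the per-configuration CRF training used for evaluation is omitted).

```python
from gensim.models import Word2Vec          # assuming gensim 4.x (vector_size=...)
from sklearn.cluster import KMeans
import numpy as np

# Character-level "sentences": each EHR sentence split into characters.
char_sents = [list("病人长期于我院肾病科住院治疗。"),
              list("病人于我院住院治疗一周。")]

def cluster_features(dim, n_clusters):
    """Train character embeddings of size `dim`, cluster them into
    `n_clusters` groups, and return a character -> cluster-id map that can be
    used as an extra CRF feature. Re-training the CRF segmenter for each
    (dim, n_clusters) pair, as done for the actual evaluation, is omitted."""
    w2v = Word2Vec(char_sents, vector_size=dim, window=5, min_count=1)
    chars = w2v.wv.index_to_key
    vectors = np.stack([w2v.wv[c] for c in chars])
    n_clusters = min(n_clusters, len(chars))          # guard for the toy corpus
    ids = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    return dict(zip(chars, ids))

all_feats = {}
for dim in (50, 100, 150, 200, 300, 400):
    for k in (50, 100, 200, 300, 400, 500, 600):
        all_feats[(dim, k)] = cluster_features(dim, k)
# The paper reports dim = 150 and k = 400 as the best configuration.
```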
The hyper-parameters of the neural network have a great impact on performance. The hyper-parameters we choose are listed in Table TABREF38 .
The dimension of the character embeddings is set to the same value used in the CRF-based word segmenter, and the number of hidden units is set to the same value as well. The maximum word length is related to the number of parameters in the GCNN unit. Since there are many long medical terminologies in EHRs, we set the maximum word length to 6. In addition, dropout is an effective way to prevent neural networks from overfitting BIBREF35 . To avoid overfitting, we apply dropout to the input layer of the scoring model with a rate of 20%.
Experimental Results
Our work experimentally compares two mainstream CWS tools (LTP and Jieba) on the training and testing sets. These two tools are widely used and recognized due to their high INLINEFORM0 -scores for word segmentation in general fields. However, specific fields contain many terminologies and uncommon words, which leads to unsatisfactory segmentation results. To address word segmentation in specific fields, both tools allow users to provide a custom dictionary. In the experiments, we also conduct a comparative experiment on whether an external domain dictionary affects the results. We manually construct the dictionary when labeling the EHRs.
From the results in Table TABREF41 , we find that Jieba benefits a lot from the external dictionary. However, the Recall of LTP decreases when the domain dictionary is added. Generally speaking, since these two tools are trained on general-domain corpora, their results are not good enough to meet the needs of subsequent NLP of EHRs when applied to specific fields.
To investigate the effectiveness of K-means features in the CRF-based segmenter, we also compare K-means with 3 different clustering algorithms, including MeanShift BIBREF36 , SpectralClustering BIBREF37 and DBSCAN BIBREF38 , on the training and testing sets. From the results in Table TABREF43 , adding additional clustering features to the CRF-based segmenter yields a significant improvement in INLINEFORM0 -score, which indicates that clustering features can effectively capture the semantic coherence between characters. Among these clustering features, K-means performs best, so we utilize the K-means results as additional features for the CRF-based segmenter.
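A sketch of how such a comparison might be set up with scikit-learn is shown below; the random matrix stands in for the real Word2Vec character embeddings, and the DBSCAN and MeanShift parameters are arbitrary placeholders rather than tuned values.

```python
import numpy as np
from sklearn.cluster import KMeans, MeanShift, SpectralClustering, DBSCAN

# Toy character-embedding matrix standing in for the real Word2Vec vectors.
rng = np.random.default_rng(0)
char_vectors = rng.normal(size=(200, 150))

clusterers = {
    "KMeans": KMeans(n_clusters=20, n_init=10, random_state=0),
    "MeanShift": MeanShift(),
    "SpectralClustering": SpectralClustering(n_clusters=20, random_state=0),
    "DBSCAN": DBSCAN(eps=12.0, min_samples=3),
}

# Each algorithm yields one cluster id per character; those ids are plugged
# into the CRF feature templates in the same way, so the comparison isolates
# the effect of the clustering algorithm itself.
for name, model in clusterers.items():
    labels = model.fit_predict(char_vectors)
    print(name, "->", len(set(labels)), "clusters")
```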
In this experiment, since uncertainty sampling is the most popular strategy in real applications for its simplicity and effectiveness BIBREF27 , we compare our proposed strategy with uncertainty sampling in active learning. We conduct our experiments as follows. First, we employ the CRF-based segmenter to annotate the unlabeled set. Then, the sampling strategy in active learning selects a part of the samples for annotators to relabel. Finally, the relabeled samples are added to the train set to re-train the segmenter. Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences, while uncertainty sampling suggests relabeling the samples that are closest to the segmenter's decision boundary.
Generally, the two main parameters in active learning are the number of iterations and the number of samples selected per iteration. To fairly investigate the influence of the two parameters, we compare our proposed strategy with uncertainty sampling under the same settings. We find that once the number of iterations is large enough, increasing it further has a limited impact on the performance of the segmenter. Therefore, we choose 30 as the number of iterations, which is a good trade-off between speed and performance. As for the number of samples selected per iteration, there are 6078 sentences in the unlabeled set; considering the high cost of relabeling, we set four sizes of samples selected per iteration: 2%, 5%, 8% and 11%.
The experimental results of the two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig. FIGREF45 , where the x-axis represents the number of iterations and the y-axis denotes the INLINEFORM0 -score of the segmenter. The scoring strategy shows consistent improvements over uncertainty sampling in the early iterations, indicating that the scoring strategy is more capable of selecting representative samples.
Furthermore, we also investigate the relation between the best INLINEFORM0 -score and the corresponding number of iterations for the two sampling strategies, which is depicted in Fig. FIGREF46 .
It is observed that, in our proposed scoring model, as the proportion of relabeled data increases, the number of iterations needed to reach the optimal word segmentation result decreases, but the INLINEFORM0 -score of the CRF-based word segmenter also gradually decreases. When the proportion is 2%, the segmenter reaches the highest INLINEFORM1 -score: 90.62%. Obviously, our proposed strategy outperforms uncertainty sampling by a large margin. Our proposed method needs only 2% relabeled samples to obtain an INLINEFORM2 -score of 90.62%, while uncertainty sampling requires 8% samples to reach its best INLINEFORM3 -score of 88.98%, which indicates that with our proposed method we only need to manually relabel a small number of samples to achieve a desired segmentation result.
Conclusion and Future Work
To relieve the effort of EHR annotation, we propose an effective word segmentation method based on active learning, in which the sampling strategy is a scoring model combining information entropy with a neural network. Compared with the mainstream uncertainty sampling, our strategy selects samples from both a statistical perspective and a deep learning perspective. In addition, to capture the coherence between characters, we add K-means clustering features to the CRF-based word segmenter. Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, we evaluate our method on the CWS task. Compared with uncertainty sampling, our method requires 6% fewer relabeled samples to achieve better performance, which proves that our method can save the cost of manual annotation to a certain extent.
In the future, we plan to employ other widely-used deep neural networks, such as convolutional neural networks and attention mechanisms, in the research of EHR segmentation. We also believe that our method can be applied to other tasks, so we will fully investigate its application to tasks such as NER and relation extraction.
Acknowledgment
The authors would like to thank the anonymous reviewers for their suggestions and comments. This work was supported by the National Natural Science Foundation of China (No. 61772201) and the National Key R&D Program of China for “Precision medical research" (No. 2018YFC0910550).
Introduction
Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0 . By analyzing EHRs, much useful information, closely related to patients, can be discovered BIBREF1 . Since Chinese EHRs are recorded without explicit word delimiters (e.g., “糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs contain many medical terminologies, such as “高血压性心脏病” (hypertensive heart disease) and “罗氏芬” (Rocephin), so only annotators with medical backgrounds are qualified to label EHRs. Second, EHRs may involve the personal privacy of patients, so they cannot be openly published on a large scale for labeling. The above two problems lead to high annotation cost and an insufficient training corpus for research on CWS in medical text.
CWS is usually formulated as a sequence labeling task BIBREF2 , which can be solved by supervised learning approaches, such as the hidden Markov model (HMM) BIBREF3 and the conditional random field (CRF) BIBREF4 . However, these methods rely heavily on handcrafted features. To relieve the effort of feature engineering, neural network-based methods are beginning to thrive BIBREF5 , BIBREF6 , BIBREF7 . However, due to insufficient annotated training data, conventional CWS models trained on open corpora often suffer from significant performance degradation when transferred to domain-specific text. Moreover, the task is rarely studied in the medical domain, and only one related work on transfer learning is found in the recent literature BIBREF8 . Furthermore, research on transfer learning mostly remains in general domains, which causes a major problem: a considerable amount of manually annotated data is still required when introducing such models into specific domains.
One of the solutions to this obstacle is to use active learning, where only a small number of samples are selected and labeled in an active manner. Active learning methods are favored by researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10 . However, only a handful of works have been conducted on CWS BIBREF2 , and few focus on medical-domain tasks.
Given the aforementioned challenges and current research, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of a word score, a link score and a sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine branch information entropy and a gated neural network to determine whether a segment is a legal word, i.e., the word score. Meanwhile, we use the hidden layer output of a long short-term memory (LSTM) BIBREF11 network to find out how the word is linked to its surroundings, i.e., the link score. The final decision on the selection of labeling samples is made by averaging the word and link scores over the whole segmented sentence, i.e., the sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of the CRF-based word segmenter.
To sum up, the main contributions of our work are summarized as follows:
The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4 . Finally, Section SECREF5 concludes the paper and discusses future work.
Chinese Word Segmentation
Over the past decades, research on CWS, an important task for Chinese NLP BIBREF7 , has a long history, and various methods have been proposed BIBREF13 , BIBREF14 , BIBREF15 . These methods mainly fall into two categories: supervised learning and deep learning BIBREF2 .
Supervised Learning Methods. Initially, supervised learning methods were widely used in CWS. Xue BIBREF13 employed a maximum entropy tagger to automatically assign tags to Chinese characters. Zhao et al. BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection. However, these methods rely greatly on manual feature engineering BIBREF17 , while handcrafted features are difficult to design and the size of these features is usually very large BIBREF6 .
Deep Learning Methods. Recently, neural networks have been applied to CWS tasks. To name a few, Zheng et al. BIBREF14 used deep layers of neural networks to learn feature representations of characters. Chen et al. BIBREF6 adopted LSTM to capture previous important information. Chen et al. BIBREF18 proposed a gated recursive neural network (GRNN), which contains reset and update gates to incorporate the complicated combinations of characters. Jiang and Tang BIBREF19 proposed a sequence-to-sequence transformer model to avoid overfitting and capture character information at distant sites of a sentence. Yang et al. BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM (LaLSTM) network. However, general word segmentation models do not work well in specific fields due to the lack of annotated training data.
Currently, a handful of domain-specific CWS approaches have been studied, but they focus on scattered domains. In the metallurgical field, Shao et al. BIBREF15 proposed a domain-specific CWS method based on a Bi-LSTM model. In the medical field, Xing et al. BIBREF8 proposed an adaptive multi-task transfer learning framework to fully leverage domain-invariant knowledge from a high-resource domain to the medical domain. Meanwhile, transfer learning still largely focuses on corpora in the general domain; when it comes to a specific domain, large amounts of manually-annotated data are still necessary. Active learning can solve this problem to a certain extent. However, due to the challenges of performing active learning on CWS, only a few studies have been conducted. On judgment documents, Yan et al. BIBREF21 adopted a local annotation strategy, which selects substrings around the informative characters in active learning. However, their method still stays at the statistical level. Unlike the above methods, we propose an active learning approach for CWS in medical text, which combines information entropy with a neural network to effectively reduce the annotation cost.
Active Learning
Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 . The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24 , query-by-committee BIBREF25 and information gain BIBREF26 . Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 .
The formal definition of uncertainty sampling is to select a sample INLINEFORM0 that maximizes the entropy INLINEFORM1 over the probability of predicted classes: DISPLAYFORM0
where INLINEFORM0 is a multi-dimensional feature vector, INLINEFORM1 is its binary label, and INLINEFORM2 is the predicted probability, through which a classifier trained on the training set maps features to labels. However, in some complicated tasks, such as CWS and NER, considering only the uncertainty of the classifier is obviously not enough.
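The following sketch shows uncertainty sampling as defined above: the predictive entropy is computed per sample and the samples with the highest entropy are selected for relabeling. The function names and the toy probability table are our own.

```python
import numpy as np

def prediction_entropy(class_probs):
    """Entropy of a model's predicted class distribution for one sample."""
    p = np.clip(np.asarray(class_probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def uncertainty_sampling(prob_table, k):
    """Return indices of the k samples with the highest predictive entropy.
    `prob_table` holds one row of class probabilities per unlabeled sample."""
    entropies = [prediction_entropy(row) for row in prob_table]
    return sorted(range(len(entropies)), key=lambda i: -entropies[i])[:k]

probs = [[0.9, 0.1], [0.55, 0.45], [0.5, 0.5], [0.99, 0.01]]
print(uncertainty_sampling(probs, k=2))   # -> [2, 1]
```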
Active Learning for Chinese Word Segmentation
Active learning methods can generally be divided into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training on the classification problem. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from the unlabeled data. Then, the relabeled samples are added to the training set for the classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as the learning engine and the selection engine, respectively.
Fig. FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning. First, we train a CRF-based segmenter on the train set. Then, the segmenter is employed to annotate the unlabeled set roughly. Subsequently, the information entropy based scoring model picks the INLINEFORM0 lowest-ranking samples for annotators to relabel. Meanwhile, the train set and the unlabeled set are updated. Finally, we re-train the segmenter. The above steps iterate until the desired accuracy is achieved or the number of iterations reaches a predefined threshold.
Algorithm: Active Learning for Chinese Word Segmentation
Input: labeled data INLINEFORM1 , unlabeled data INLINEFORM2 , the number of iterations INLINEFORM3 , the number of samples selected per iteration INLINEFORM4 , partitioning function INLINEFORM5 , size INLINEFORM6
Output: a word segmentation model INLINEFORM7 with the smallest test set loss INLINEFORM8
Initialize: INLINEFORM9
1. Train a word segmenter INLINEFORM0 on the labeled data.
2. Estimate the test set loss INLINEFORM0 .
3. Label INLINEFORM0 with INLINEFORM1 .
4. For INLINEFORM0 to INLINEFORM1 :
   (a) Compute INLINEFORM3 for each automatically segmented sample with the branch information entropy based scoring model.
   (b) Select the INLINEFORM0 lowest-ranking samples INLINEFORM1 .
   (c) Relabel INLINEFORM0 by annotators.
   (d) Form a new labeled dataset INLINEFORM0 .
   (e) Form a new unlabeled dataset INLINEFORM0 .
   (f) Train a word segmenter INLINEFORM0 .
   (g) Estimate the new test loss INLINEFORM0 .
   (h) Compute the loss reduction INLINEFORM0 .
5. Return the word segmentation model INLINEFORM0 INLINEFORM1 with the smallest test set loss INLINEFORM2 INLINEFORM3 .
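A Python sketch of this loop is given below. The callables passed in (train_segmenter, sequence_score, oracle_relabel, test_loss) and the segment() interface are placeholders for the CRF segmenter, the entropy-based scoring model, human annotation and evaluation; the names are ours, not the paper's.

```python
def active_learning_cws(labeled, unlabeled, test, iterations, k,
                        train_segmenter, sequence_score, oracle_relabel, test_loss):
    """Sketch of the active learning loop: train, auto-label, rank by
    sequence score, relabel the k lowest-scoring samples, retrain, and keep
    the model with the smallest test loss."""
    segmenter = train_segmenter(labeled)
    best_model, best_loss = segmenter, test_loss(segmenter, test)

    for _ in range(iterations):
        # Automatically segment the unlabeled sentences with the current model.
        auto_labeled = [(s, segmenter.segment(s)) for s in unlabeled]
        # Rank by sequence score and send the k least reliable ones to annotators.
        auto_labeled.sort(key=lambda pair: sequence_score(pair[1]))
        to_relabel = [s for s, _ in auto_labeled[:k]]
        labeled = labeled + [oracle_relabel(s) for s in to_relabel]
        unlabeled = [s for s, _ in auto_labeled[k:]]

        segmenter = train_segmenter(labeled)
        loss = test_loss(segmenter, test)
        if loss < best_loss:
            best_model, best_loss = segmenter, loss
    return best_model
```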
CRF-based Word Segmenter
CWS can be formalized as a sequence labeling problem with character position tags (`B', `M', `E', `S'). We therefore convert the labeled data into the `BMES' format, in which each character in the sequence is assigned one of the following labels: B = beginning of a word, M = middle of a word, E = end of a word and S = single-character word.
In this paper, we use CRF as the training model for the CWS task. Given the observed sequence, CRF has a single exponential model for the joint probability of the entire sequence of labels, while the maximum entropy Markov model (MEMM) BIBREF29 uses per-state exponential models for the conditional probabilities of next states BIBREF4 . Therefore, CRF can solve the label bias problem effectively. Compared with neural networks, it also depends less on the corpus size.
First, we pre-process the EHRs at the character level, separating each character of the raw EHRs. For instance, given a sentence INLINEFORM0 , where INLINEFORM1 represents the INLINEFORM2 -th character, the separated form is INLINEFORM3 . Then, we employ Word2Vec BIBREF30 to train on the pre-processed EHRs and obtain character embeddings. To capture interactions between adjacent characters, the K-means clustering algorithm BIBREF31 is utilized to model the coherence over characters. In general, K-means divides the INLINEFORM4 EHR characters into INLINEFORM5 clusters such that characters in the same cluster are more similar to each other. In each iteration, K-means assigns every EHR character to the nearest cluster based on its distance to the cluster mean vector, and then recalculates and adjusts the mean vectors of these clusters until they converge. The K-means features explicitly reflect the difference between two adjacent characters, and even among multiple characters. Finally, we additionally add the K-means clustering features to the input of the CRF-based segmenter. The segmenter makes positional tagging decisions over individual characters. For example, the Chinese segmented sentence “病人/长期/于/我院/肾病科/住院/治疗/。/" (The patient was hospitalized for a long time in the nephrology department of our hospital.) is labeled as `BEBESBEBMEBEBES'.
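The conversion to BMES tags and the construction of per-character features with K-means cluster ids can be sketched as follows; the feature template and the stand-in cluster assignment are our own illustrative choices, and the resulting feature dicts would then be fed to a CRF toolkit.

```python
def bmes_tags(segmented_words):
    """Convert a list of segmented words into per-character BMES tags."""
    tags = []
    for w in segmented_words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def char_features(chars, cluster_of, i):
    """Feature dict for character i: the character itself, its neighbours and
    their K-means cluster ids (cluster_of maps a character to its cluster)."""
    feats = {
        "char": chars[i],
        "cluster": cluster_of.get(chars[i], -1),
        "prev_char": chars[i - 1] if i > 0 else "<BOS>",
        "next_char": chars[i + 1] if i < len(chars) - 1 else "<EOS>",
    }
    feats["prev_cluster"] = cluster_of.get(feats["prev_char"], -1)
    feats["next_cluster"] = cluster_of.get(feats["next_char"], -1)
    return feats

words = ["病人", "长期", "于", "我院", "肾病科", "住院", "治疗", "。"]
chars = [c for w in words for c in w]
print("".join(bmes_tags(words)))   # BEBESBEBMEBEBES
cluster_of = {c: i % 5 for i, c in enumerate(set(chars))}   # stand-in cluster ids
X = [char_features(chars, cluster_of, i) for i in range(len(chars))]
```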
Information Entropy Based Scoring Model
To select the most appropriate sentences from a large unlabeled corpus, we propose a scoring model based on information entropy and a neural network as the sampling strategy of active learning, inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link by which the candidate word directly follows the previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model.
We use a gated neural network and information entropy to capture the likelihood of a segment being a legal word. The architecture of the word score model is depicted in Fig. FIGREF12 .
Gated Combination Neural Network (GCNN)
To effectively learn word representations from character embeddings, we use GCNN BIBREF32 . The architecture of GCNN is demonstrated in Fig. FIGREF13 . It contains two types of gates: update gates and reset gates. The gated mechanism not only captures the characteristics of the characters themselves, but also utilizes the interactions between characters. The two kinds of gate vectors determine the final output of the network: the update gates help the model decide what should be passed on, and the reset gates mainly help the model decide what should be cleared. In particular, the word embedding of a word with INLINEFORM0 characters can be computed as: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are update gates for new combination vector INLINEFORM2 and the i-th character INLINEFORM3 respectively, the combination vector INLINEFORM4 is formalized as: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are reset gates for characters.
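A simplified PyTorch sketch of the gated combination is shown below: reset gates modulate the character embeddings before they are combined into a candidate vector, and softmax-normalized update gates mix the candidate with the characters. This is only meant to illustrate the gating idea for a fixed maximum word length; the exact equations are those of BIBREF32 and the formulas above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNNWord(nn.Module):
    """Simplified sketch of the gated combination of character embeddings
    into one word embedding (reset gates on the characters, update gates
    mixing the new combination vector with the characters)."""

    def __init__(self, dim=150, max_len=6):
        super().__init__()
        self.dim, self.max_len = dim, max_len
        self.reset = nn.Linear(max_len * dim, max_len * dim)     # reset gates r_i
        self.combine = nn.Linear(max_len * dim, dim)             # candidate vector
        self.update = nn.Linear(max_len * dim, max_len + 1)      # update gates z

    def forward(self, chars):                 # chars: (n, dim), n <= max_len
        n = chars.size(0)
        padded = torch.zeros(self.max_len, self.dim)
        padded[:n] = chars
        flat = padded.reshape(1, -1)
        r = torch.sigmoid(self.reset(flat)).reshape(self.max_len, self.dim)
        candidate = torch.tanh(self.combine((r * padded).reshape(1, -1)))  # (1, dim)
        z = F.softmax(self.update(flat), dim=-1)                 # (1, max_len+1)
        mix = torch.cat([candidate, padded], dim=0)              # (max_len+1, dim)
        return (z.t() * mix).sum(dim=0)                          # (dim,)

gcnn = GCNNWord(dim=150, max_len=6)
word_vec = gcnn(torch.randn(3, 150))          # a 3-character candidate word
print(word_vec.shape)                         # torch.Size([150])
```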
Left and Right Branch Information Entropy
In general, any string in a sentence may be a word. However, compared with a string that is not a word, the string of a word is significantly more independent of its context. Branch information entropy uses the statistical characteristics of a string to judge how tightly the string is linked to its neighboring characters, which reflects the likelihood of the string being a word. The left and right branch information entropy can be formalized as follows: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the INLINEFORM1 -th candidate word, INLINEFORM2 denotes the character set, INLINEFORM3 denotes the probability that character INLINEFORM4 is on the left of word INLINEFORM5 and INLINEFORM6 denotes the probability that character INLINEFORM7 is on the right of word INLINEFORM8 . INLINEFORM9 and INLINEFORM10 respectively represent the left and right branch information entropy of the candidate word INLINEFORM11 . If the left and right branch information entropy of a candidate word is relatively high, the probability that the candidate word can be combined with the surrounded characters to form a word is low, thus the candidate word is likely to be a legal word.
To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right entropy of each candidate word, then take average as the measurement standard: DISPLAYFORM0
We represent a segmented sentence with INLINEFORM0 candidate words as [ INLINEFORM1 , INLINEFORM2 ,..., INLINEFORM3 ], so the INLINEFORM4 ( INLINEFORM5 ) of the INLINEFORM6 -th candidate word is computed by its average entropy: DISPLAYFORM0
In this paper, we use LSTM to capture the coherence between words in a segmented sentence. This neural network is mainly an optimization for traditional RNN. RNN is widely used to deal with time-series prediction problems. The result of its current hidden layer is determined by the input of the current layer and the output of the previous hidden layer BIBREF33 . Therefore, RNN can remember historical results. However, traditional RNN has problems of vanishing gradient and exploding gradient when training long sequences BIBREF34 . By adding a gated mechanism to RNN, LSTM effectively solves these problems, which motivates us to get the link score with LSTM. Formally, the LSTM unit performs the following operations at time step INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are the inputs of LSTM, all INLINEFORM3 and INLINEFORM4 are a set of parameter matrices to be trained, and INLINEFORM5 is a set of bias parameter matrices to be trained. INLINEFORM6 and INLINEFORM7 operation respectively represent matrix element-wise multiplication and sigmoid function. In the LSTM unit, there are two hidden layers ( INLINEFORM8 , INLINEFORM9 ), where INLINEFORM10 is the internal memory cell for dealing with vanishing gradient, while INLINEFORM11 is the main output of the LSTM unit for complex operations in subsequent layers.
We denote INLINEFORM0 as the word embedding at time step INLINEFORM1 . A prediction INLINEFORM2 of the next word embedding INLINEFORM3 can be computed from the hidden layer INLINEFORM4 : DISPLAYFORM0
Therefore, link score of next word embedding INLINEFORM0 can be computed as: DISPLAYFORM0
Due to the structure of LSTM, vector INLINEFORM0 contains important information of entire segmentation decisions. In this way, the link score gets the result of the sequence-level word segmentation, not just word-level.
Intuitively, we can compute the score of a segmented sequence by summing up word scores and link scores. However, we find that a sequence with more candidate words tends to have higher sequence scores. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate final scores as follows: DISPLAYFORM0
where INLINEFORM0 denotes the INLINEFORM1 -th segmented sequence with INLINEFORM2 candidate words, and INLINEFORM3 represents the INLINEFORM4 -th candidate words in the segmented sequence.
When training the model, we seek to minimize the loss defined over the sequence scores of the correct segmentation and the predicted segmentation: DISPLAYFORM0
where INLINEFORM0 is the loss function.
Datasets
We collect 204 EHRs with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine and each contains 27 types of records. We choose 4 different types with a total of 3868 records from them, which are first course reports, medical records, chief ward round records and discharge records. The detailed information of EHRs are listed in Table TABREF32 .
We split our datasets as follows. First, we randomly select 3200 of the 3868 records as the unlabeled set. Then, we manually annotate the remaining 668 records as the labeled set, which contains 1170 sentences. Finally, we randomly divide the labeled set into a train set and a test set with a ratio of 7:3. Statistics of the datasets are listed in Table TABREF33 .
Parameter Settings
To determine suitable parameters, we divide the training set into two parts: the first 80% of the sentences as the training set and the remaining 20% as the validation set.
Character embedding dimensions and K-means clusters are two main parameters in the CRF-based word segmenter.
In this paper, we choose a character-based CRF without any features as the baseline. First, we use Word2Vec to train character embeddings with dimensions of [`50', `100', `150', `200', `300', `400'] respectively, thus obtaining six character embeddings of different dimensions. Second, these six types of character embeddings are used as the input to the K-means algorithm with the number of clusters set to [`50', `100', `200', `300', `400', `500', `600'] respectively, to capture the corresponding features of the character embeddings. Then, we add the K-means clustering features to the baseline for training. As can be seen from Fig. FIGREF36 , when the character embedding dimension INLINEFORM0 = 150 and the number of clusters INLINEFORM1 = 400, the CRF-based word segmenter performs best, so these two parameters are used in subsequent experiments.
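As an illustration of this step, the sketch below clusters pre-trained character embeddings with K-means and looks up a cluster id per character. It assumes the embeddings are already available as a NumPy matrix (e.g., trained with Word2Vec) and uses scikit-learn with a toy cluster count; neither the toolkit nor the toy data reflects the authors' actual setup, where 400 clusters over 150-dimensional embeddings worked best.

```python
import numpy as np
from sklearn.cluster import KMeans

# assumed inputs: one 150-dimensional embedding per character type
chars = ["病", "人", "长", "期", "住", "院"]
char_vecs = np.random.default_rng(0).normal(size=(len(chars), 150))

kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(char_vecs)
cluster_of = {c: int(label) for c, label in zip(chars, kmeans.labels_)}

# the cluster id becomes an extra categorical feature for the CRF segmenter
print(cluster_of)
```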
Hyper-parameters of neural network have a great impact on the performance. The hyper-parameters we choose are listed in Table TABREF38 .
The dimension of the character embeddings is set to the same value as in the CRF-based word segmenter, and the number of hidden units is set to be the same as well. The maximum word length is related to the number of parameters in the GCNN unit. Since there are many long medical terminologies in EHRs, we set the maximum word length to 6. In addition, dropout is an effective way to prevent neural networks from overfitting BIBREF35 . To avoid overfitting, we apply dropout to the input layer of the scoring model with a rate of 20%.
Experimental Results
Our work experimentally compares two mainstream CWS tools (LTP and Jieba) on training and testing sets. These two tools are widely used and recognized due to their high INLINEFORM0 -score of word segmentation in general fields. However, in specific fields, there are many terminologies and uncommon words, which lead to the unsatisfactory performance of segmentation results. To solve the problem of word segmentation in specific fields, these two tools provide a custom dictionary for users. In the experiments, we also conduct a comparative experiment on whether external domain dictionary has an effect on the experimental results. We manually construct the dictionary when labeling EHRs.
From the results in Table TABREF41 , we find that Jieba benefits a lot from the external dictionary. However, the Recall of LTP decreases when the domain dictionary is added. Generally speaking, since these two tools are trained on general-domain corpora, the results are not good enough to meet the needs of subsequent NLP of EHRs when applied to specific fields.
To investigate the effectiveness of K-means features in the CRF-based segmenter, we also compare K-means with 3 different clustering features, including MeanShift BIBREF36 , SpectralClustering BIBREF37 and DBSCAN BIBREF38 , on the training and testing sets. From the results in Table TABREF43 , adding additional clustering features to the CRF-based segmenter yields a significant improvement in INLINEFORM0 -score, which indicates that clustering features can effectively capture the semantic coherence between characters. Among these clustering features, K-means performs best, so we utilize the K-means results as additional features for the CRF-based segmenter.
In this experiment, since uncertainty sampling is the most popular strategy in real applications for its simplicity and effectiveness BIBREF27 , we compare our proposed strategy with uncertainty sampling in active learning. We conduct our experiments as follows. First, we employ the CRF-based segmenter to annotate the unlabeled set. Then, the sampling strategy in active learning selects a part of the samples for annotators to relabel. Finally, the relabeled samples are added to the train set for the segmenter to re-train. Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences, while uncertainty sampling suggests relabeling samples that are closest to the segmenter's decision boundary.
Generally, the two main parameters in active learning are the number of iterations and the number of samples selected per iteration. To fairly investigate the influence of these two parameters, we compare our proposed strategy with uncertainty sampling under the same parameter settings. We find that once the number of iterations is large enough, increasing it further has a limited impact on the performance of the segmenter. Therefore, we choose 30 as the number of iterations, which is a good trade-off between speed and performance. As for the number of samples selected per iteration, there are 6078 sentences in the unlabeled set; considering the high cost of relabeling, we set four sizes of samples selected per iteration, namely 2%, 5%, 8% and 11%.
The experimental results of two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig. FIGREF45 , where x-axis represents the number of iterations and y-axis denotes the INLINEFORM0 -score of the segmenter. Scoring strategy shows consistent improvements over uncertainty sampling in the early iterations, indicating that scoring strategy is more capable of selecting representative samples.
Furthermore, we also investigate the relations between the best INLINEFORM0 -score and corresponding number of iteration on two sampling strategies, which is depicted in Fig. FIGREF46 .
It is observed that with our proposed scoring model, as the proportion of relabeled data increases, the number of iterations needed to reach the optimal word segmentation result decreases, but the INLINEFORM0 -score of the CRF-based word segmenter also gradually decreases. When the proportion is 2%, the segmenter reaches the highest INLINEFORM1 -score: 90.62%. Obviously, our proposed strategy outperforms uncertainty sampling by a large margin. Our proposed method needs only 2% relabeled samples to obtain an INLINEFORM2 -score of 90.62%, while uncertainty sampling requires 8% of the samples to reach its best INLINEFORM3 -score of 88.98%, which indicates that with our proposed method, we only need to manually relabel a small number of samples to achieve the desired segmentation result.
Conclusion and Future Work
To relieve the effort of EHR annotation, we propose an effective word segmentation method based on active learning, in which the sampling strategy is a scoring model combining information entropy with a neural network. Compared with the mainstream uncertainty sampling, our strategy selects samples from both a statistical perspective and a deep learning perspective. In addition, to capture the coherence between characters, we add K-means clustering features to the CRF-based word segmenter. Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, we evaluate our method on the CWS task. Compared with uncertainty sampling, our method requires 6% fewer relabeled samples to achieve better performance, which proves that our method can save the cost of manual annotation to a certain extent.
In the future, we plan to employ other widely-used deep neural networks, such as convolutional neural networks and attention mechanisms, in the research of EHR segmentation. Moreover, we believe that our method can be applied to other tasks as well, so we will fully investigate its application in other tasks, such as NER and relation extraction.
Acknowledgment
The authors would like to appreciate any suggestions or comments from the anonymous reviewers. This work was supported by the National Natural Science Foundation of China (No. 61772201) and the National Key R&D Program of China for “Precision medical research" (No. 2018YFC0910550). | Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively. |
0bfed6f9cfe93617c5195c848583e3945f2002ff | 0bfed6f9cfe93617c5195c848583e3945f2002ff_0 | Q: Which neural network architectures are employed?
Text: Introduction
Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0 . By analyzing EHRs, much useful information closely related to patients can be discovered BIBREF1 . Since Chinese EHRs are recorded without explicit word delimiters (e.g., “UTF8gkai糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs have many medical terminologies, such as “UTF8gkai高血压性心脏病” (hypertensive heart disease) and “UTF8gkai罗氏芬” (Rocephin), so only annotators with medical backgrounds are qualified to label EHRs. Second, EHRs may involve the personal privacy of patients. Therefore, they cannot be openly published on a large scale for labeling. The above two problems lead to high annotation costs and an insufficient training corpus for the research of CWS in medical text.
CWS is usually formulated as a sequence labeling task BIBREF2 , which can be solved by supervised learning approaches, such as hidden Markov models (HMM) BIBREF3 and conditional random fields (CRF) BIBREF4 . However, these methods rely heavily on handcrafted features. To relieve the effort of feature engineering, neural network-based methods are beginning to thrive BIBREF5 , BIBREF6 , BIBREF7 . However, due to insufficient annotated training data, conventional models for CWS trained on open corpora often suffer from significant performance degradation when transferred to domain-specific text. Moreover, the task has rarely been explored in the medical domain, and only one related work on transfer learning is found in the recent literature BIBREF8 . Furthermore, research related to transfer learning mostly remains in general domains, which causes a major problem: a considerable amount of manually annotated data is still required when introducing the models into specific domains.
One of the solutions to this obstacle is to use active learning, where only a small number of samples are selected and labeled in an active manner. Active learning methods are favored by researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10 . However, only a handful of works have been conducted on CWS BIBREF2 , and few focus on medical-domain tasks.
Given the aforementioned challenges and current researches, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of word score, link score and sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine information branch and gated neural network to determine if the segment is a legal word, i.e., word score. Meanwhile, we use the hidden layer output of the long short-term memory (LSTM) BIBREF11 to find out how the word is linked to its surroundings, i.e., link score. The final decision on the selection of labeling samples is made by calculating the average of word and link scores on the whole segmented sentence, i.e., sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of CRF-based word segmenter.
To sum up, the main contributions of our work are summarized as follows:
The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4 . Finally, Section SECREF5 concludes the paper and envisions on future work.
Chinese Word Segmentation
Research on CWS, an important task for Chinese NLP BIBREF7 , has a long history, and various methods have been proposed over the past decades BIBREF13 , BIBREF14 , BIBREF15 . These methods mainly fall into two categories: supervised learning and deep learning BIBREF2 .
Supervised Learning Methods. Initially, supervised learning methods were widely-used in CWS. Xue BIBREF13 employed a maximum entropy tagger to automatically assign Chinese characters. Zhao et al. BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection. However, these methods greatly rely on manual feature engineering BIBREF17 , while handcrafted features are difficult to design, and the size of these features is usually very large BIBREF6 .
Deep Learning Methods. Recently, neural networks have been applied in CWS tasks. To name a few, Zheng et al. BIBREF14 used deep layers of neural networks to learn feature representations of characters. Chen et al. BIBREF6 adopted LSTM to capture the previous important information. Chen et al. BIBREF18 proposed a gated recursive neural network (GRNN), which contains reset and update gates to incorporate the complicated combinations of characters. Jiang and Tang BIBREF19 proposed a sequence-to-sequence transformer model to avoid overfitting and capture character information at the distant site of a sentence. Yang et al. BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM (LaLSTM) network. However, general word segmentation models do not work well in specific fields due to the lack of annotated training data.
Currently, a handful of domain-specific CWS approaches have been studied, but they focus on scattered, individual domains. In the metallurgical field, Shao et al. BIBREF15 proposed a domain-specific CWS method based on a Bi-LSTM model. In the medical field, Xing et al. BIBREF8 proposed an adaptive multi-task transfer learning framework to fully leverage domain-invariant knowledge from a high-resource domain for the medical domain. Meanwhile, transfer learning still largely relies on general-domain corpora; when it comes to a specific domain, large amounts of manually-annotated data are still necessary. Active learning can solve this problem to a certain extent. However, due to the challenges of performing active learning on CWS, only a few studies have been conducted. On judgements, Yan et al. BIBREF21 adopted a local annotation strategy, which selects substrings around the informative characters in active learning. However, their method still stays at the statistical level. Unlike the above method, we propose an active learning approach for CWS in medical text, which combines information entropy with a neural network to effectively reduce the annotation cost.
Active Learning
Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 . The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24 , query-by-committee BIBREF25 and information gain BIBREF26 . Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 .
The formal definition of uncertainty sampling is to select a sample INLINEFORM0 that maximizes the entropy INLINEFORM1 over the probability of predicted classes: DISPLAYFORM0
where INLINEFORM0 is a multi-dimensional feature vector, INLINEFORM1 is its binary label, and INLINEFORM2 is the predicted probability, through which a classifier trained on the training set can map features to labels. However, in some complicated tasks, such as CWS and NER, only considering the uncertainty of the classifier is obviously not enough.
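For comparison, the following is a minimal sketch of entropy-based uncertainty sampling as defined above; the predicted class probabilities are assumed to come from an already-trained classifier, and the toy array is invented for illustration.

```python
import numpy as np

def select_most_uncertain(probs, k):
    """probs: array of shape (n_samples, n_classes) with predicted probabilities.

    Returns the indices of the k samples with the highest predictive entropy."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(-entropy)[:k]

probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
print(select_most_uncertain(probs, 1))  # -> [1], the most uncertain sample
```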
Active Learning for Chinese Word Segmentation
Active learning methods generally consist of two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training on the classification problem. The selection engine is based on the sampling strategy, which chooses samples from the unlabeled data that need to be relabeled by annotators. Then, the relabeled samples are added to the training set for the classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as the learning engine and the selection engine, respectively.
Fig. FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning. First, we train a CRF-based segmenter on the train set. Then, the segmenter is employed to roughly annotate the unlabeled set. Subsequently, the information entropy based scoring model picks the INLINEFORM0 lowest-ranking samples for annotators to relabel. Meanwhile, the train set and the unlabeled set are updated. Finally, we re-train the segmenter. The above steps iterate until the desired accuracy is achieved or the number of iterations has reached a predefined threshold.

Algorithm (Active Learning for Chinese Word Segmentation). Input: labeled data INLINEFORM1 , unlabeled data INLINEFORM2 , the number of iterations INLINEFORM3 , the number of samples selected per iteration INLINEFORM4 , partitioning function INLINEFORM5 , size INLINEFORM6 . Output: a word segmentation model INLINEFORM7 with the smallest test set loss INLINEFORM8 . Initialize: INLINEFORM9 .

1. Train a word segmenter INLINEFORM0 on the labeled data.
2. Estimate the test set loss INLINEFORM0 .
3. Label the unlabeled data INLINEFORM0 with the segmenter INLINEFORM1 .
4. For INLINEFORM0 to INLINEFORM1 : for each sample INLINEFORM2 , compute its score INLINEFORM3 with the branch information entropy based scoring model.
5. Select the INLINEFORM0 lowest-ranking samples INLINEFORM1 .
6. Have annotators relabel the selected samples INLINEFORM0 .
7. Form a new labeled dataset INLINEFORM0 .
8. Form a new unlabeled dataset INLINEFORM0 .
9. Re-train the word segmenter INLINEFORM0 .
10. Estimate the new test set loss INLINEFORM0 .
11. Compute the loss reduction INLINEFORM0 ; if the loss decreases, keep the current segmenter as the best model, and repeat from step 4 until all iterations are finished.
12. Return the word segmenter INLINEFORM1 with the smallest test set loss INLINEFORM2 .
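The loop below is a high-level sketch of the procedure in Algorithm SECREF3, not the authors' implementation; train_segmenter, test_loss, segment, score_sentences and human_relabel are hypothetical functions standing in for the CRF segmenter, the loss estimate, the scoring model and the annotators.

```python
def active_learning(labeled, unlabeled, test, n_iter, k,
                    train_segmenter, test_loss, segment,
                    score_sentences, human_relabel):
    best_model, best_loss = None, float("inf")
    for _ in range(n_iter):
        model = train_segmenter(labeled)              # (re-)train the segmenter
        loss = test_loss(model, test)                 # estimate the test set loss
        if loss < best_loss:
            best_model, best_loss = model, loss
        predictions = [segment(model, s) for s in unlabeled]
        scores = score_sentences(predictions)         # entropy + LSTM scoring model
        ranked = sorted(range(len(unlabeled)), key=lambda i: scores[i])
        chosen = set(ranked[:k])                      # k lowest-scoring sentences
        relabeled = human_relabel([unlabeled[i] for i in chosen])
        labeled = labeled + relabeled                 # grow the labeled set
        unlabeled = [s for i, s in enumerate(unlabeled) if i not in chosen]
    return best_model
```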
CRF-based Word Segmenter
CWS can be formalized as a sequence labeling problem with character position tags, which are (`B', `M', `E', `S'). Therefore, we convert the labeled data into the `BMES' format, in which each character in the sequence is assigned one of the following labels: B=beginning of a word, M=middle of a word, E=end of a word and S=single-character word.
In this paper, we use CRF as a training model for CWS task. Given the observed sequence, CRF has a single exponential model for the joint probability of the entire sequence of labels, while maximum entropy markov model (MEMM) BIBREF29 uses per-state exponential models for the conditional probabilities of next states BIBREF4 . Therefore, it can solve the label bias problem effectively. Compared with neural networks, it has less dependency on the corpus size.
First, we pre-process the EHRs at the character level, separating each character of the raw EHRs. For instance, given a sentence INLINEFORM0 , where INLINEFORM1 represents the INLINEFORM2 -th character, the separated form is INLINEFORM3 . Then, we employ Word2Vec BIBREF30 to train character embeddings on the pre-processed EHRs. To capture interactions between adjacent characters, the K-means clustering algorithm BIBREF31 is utilized to model the coherence over characters. In general, K-means divides INLINEFORM4 EHR characters into INLINEFORM5 clusters such that the similarity of EHR characters within the same cluster is higher. In each iteration, K-means assigns each EHR character to the nearest cluster based on the distance to the mean vector; the mean vectors of these clusters are then recalculated and adjusted until they converge. K-means features explicitly show the difference between two adjacent characters and even multiple characters. Finally, we additionally add the K-means clustering features to the input of the CRF-based segmenter. The segmenter makes positional tagging decisions over individual characters. For example, the Chinese segmented sentence UTF8gkai“病人/长期/于/我院/肾病科/住院/治疗/。/" (The patient was hospitalized for a long time in the nephrology department of our hospital.) is labeled as `BEBESBEBMEBEBES'.
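As a small illustration, the sketch below shows the BMES conversion together with per-character features that include a K-means cluster id; the exact CRF feature templates are an assumption, since the text only states that cluster features are added to the character-based baseline.

```python
def to_bmes(segmented_sentence):
    """['病人', '长期', '于'] -> ['B', 'E', 'B', 'E', 'S']"""
    tags = []
    for word in segmented_sentence:
        if len(word) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
    return tags

def char_features(chars, i, cluster_of):
    """Feature dict for character i: identity, neighbors and K-means cluster id."""
    feats = {"char": chars[i], "cluster": cluster_of.get(chars[i], -1)}
    if i > 0:
        feats["prev_char"] = chars[i - 1]
    if i < len(chars) - 1:
        feats["next_char"] = chars[i + 1]
    return feats

print(to_bmes(["病人", "长期", "于"]))  # ['B', 'E', 'B', 'E', 'S']
```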
Information Entropy Based Scoring Model
To select the most appropriate sentences from a large unlabeled corpus, we propose a scoring model based on information entropy and a neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link by which the candidate word directly follows the previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model.
We use gated neural network and information entropy to capture the likelihood of the segment being a legal word. The architecture of word score model is depicted in Fig. FIGREF12 .
Gated Combination Neural Network (GCNN)
To effectively learn word representations through character embeddings, we use GCNN BIBREF32 . The architecture of GCNN is demonstrated in Fig. FIGREF13 , which includes update gate and reset gate. The gated mechanism not only captures the characteristics of the characters themselves, but also utilizes the interaction between the characters. There are two types of gates in this network structure: reset gates and update gates. These two gated vectors determine the final output of the gated recurrent neural network, where the update gate helps the model determine what to be passed, and the reset gate primarily helps the model decide what to be cleared. In particular, the word embedding of a word with INLINEFORM0 characters can be computed as: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the update gates for the new combination vector INLINEFORM2 and the i-th character INLINEFORM3 , respectively. The combination vector INLINEFORM4 is formalized as: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are reset gates for characters.
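Since the GCNN equations are only referenced and not reproduced here, the sketch below shows a generic, GRU-style gated combination of two character vectors into a candidate-word vector; the particular gating functions, parameter shapes and the final mixing step are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_combine(c1, c2, params):
    """Combine two character vectors c1, c2 (each of dim d) into a word vector.

    params is a dict of assumed parameter matrices, each of shape (d, 2*d)."""
    x = np.concatenate([c1, c2])
    r1 = sigmoid(params["W_r1"] @ x)   # reset gate for the first character
    r2 = sigmoid(params["W_r2"] @ x)   # reset gate for the second character
    h = np.tanh(params["W_h"] @ np.concatenate([r1 * c1, r2 * c2]))  # combination vector
    z = sigmoid(params["W_z"] @ x)     # update gate
    return z * h + (1.0 - z) * 0.5 * (c1 + c2)  # gated mixture of new and old information

d = 4
rng = np.random.default_rng(1)
params = {k: rng.normal(size=(d, 2 * d)) for k in ["W_r1", "W_r2", "W_h", "W_z"]}
print(gated_combine(rng.normal(size=d), rng.normal(size=d), params))
```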
Left and Right Branch Information Entropy

In general, each string in a sentence may be a word. However, compared with a string which is not a word, the string of a word is significantly more independent. The branch information entropy is usually used to judge, through the statistical characteristics of a string, whether the characters in the string are tightly linked, which reflects the likelihood of the string being a word. The left and right branch information entropy can be formalized as follows: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 denotes the INLINEFORM1 -th candidate word, INLINEFORM2 denotes the character set, INLINEFORM3 denotes the probability that character INLINEFORM4 is on the left of word INLINEFORM5 and INLINEFORM6 denotes the probability that character INLINEFORM7 is on the right of word INLINEFORM8 . INLINEFORM9 and INLINEFORM10 respectively represent the left and right branch information entropy of the candidate word INLINEFORM11 . If the left and right branch information entropy of a candidate word is relatively high, the probability that the candidate word can be combined with the surrounded characters to form a word is low, thus the candidate word is likely to be a legal word.
To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right entropy of each candidate word, then take average as the measurement standard: DISPLAYFORM0
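A minimal sketch of this computation is shown below: the branch entropy on each side is estimated from neighbor counts and the two values are averaged. The toy counts and the absence of smoothing are assumptions made only for illustration.

```python
import math
from collections import Counter

def branch_entropy(neighbor_counts):
    """Entropy over the characters observed on one side of a candidate word."""
    total = sum(neighbor_counts.values())
    return -sum((c / total) * math.log(c / total) for c in neighbor_counts.values())

# toy counts of characters seen immediately left/right of the candidate word "住院"
left = Counter({"科": 3, "行": 2, "要": 1})
right = Counter({"治": 4, "。": 2})
avg_entropy = 0.5 * (branch_entropy(left) + branch_entropy(right))
print(round(avg_entropy, 3))
```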
We represent a segmented sentence with INLINEFORM0 candidate words as [ INLINEFORM1 , INLINEFORM2 ,..., INLINEFORM3 ], so the INLINEFORM4 ( INLINEFORM5 ) of the INLINEFORM6 -th candidate word is computed by its average entropy: DISPLAYFORM0
In this paper, we use LSTM to capture the coherence between words in a segmented sentence. This neural network is essentially an improvement over the traditional RNN. RNNs are widely used for time-series prediction problems: the result of the current hidden layer is determined by the input of the current layer and the output of the previous hidden layer BIBREF33 , so an RNN can remember historical results. However, the traditional RNN suffers from vanishing and exploding gradients when training on long sequences BIBREF34 . By adding a gated mechanism to the RNN, LSTM effectively solves these problems, which motivates us to compute the link score with an LSTM. Formally, the LSTM unit performs the following operations at time step INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are the inputs of the LSTM, all INLINEFORM3 and INLINEFORM4 are parameter matrices to be trained, and INLINEFORM5 is a set of bias parameters to be trained. The INLINEFORM6 and INLINEFORM7 operations denote element-wise multiplication and the sigmoid function, respectively. In the LSTM unit, there are two hidden layers ( INLINEFORM8 , INLINEFORM9 ), where INLINEFORM10 is the internal memory cell that deals with the vanishing gradient, while INLINEFORM11 is the main output of the LSTM unit used for complex operations in subsequent layers.
We denote INLINEFORM0 as the word embedding at time step INLINEFORM1 ; a prediction INLINEFORM2 of the next word embedding INLINEFORM3 can then be computed from the hidden layer INLINEFORM4 : DISPLAYFORM0
Therefore, the link score of the next word embedding INLINEFORM0 can be computed as: DISPLAYFORM0
Due to the structure of the LSTM, the vector INLINEFORM0 carries important information about the entire sequence of segmentation decisions. In this way, the link score reflects sequence-level segmentation decisions rather than only word-level ones.
Intuitively, we can compute the score of a segmented sequence by summing up word scores and link scores. However, we find that a sequence with more candidate words tends to have higher sequence scores. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate final scores as follows: DISPLAYFORM0
where INLINEFORM0 denotes the INLINEFORM1 -th segmented sequence with INLINEFORM2 candidate words, and INLINEFORM3 represents the INLINEFORM4 -th candidate words in the segmented sequence.
When training the model, we seek to minimize the loss defined over the sequence scores of the correctly segmented sentence and the predicted segmented sentence. DISPLAYFORM0
where INLINEFORM0 is the loss function.
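Because the loss formula itself is not reproduced here, the sketch below uses a max-margin (hinge-style) formulation as one plausible instantiation; treating the objective as a margin between the gold and predicted sequence scores is an assumption for illustration only.

```python
def margin_loss(gold_score, predicted_score, margin=1.0):
    """Penalize the model when the predicted segmentation scores higher than
    the gold segmentation by less than the margin (a hinge-style loss)."""
    return max(0.0, margin + predicted_score - gold_score)

print(margin_loss(gold_score=2.5, predicted_score=2.1))  # 0.6
```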
Datasets
We collect 204 EHRs with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, each containing 27 types of records. We choose 4 different types with a total of 3868 records from them, namely first course reports, medical records, chief ward round records and discharge records. The detailed information of the EHRs is listed in Table TABREF32 .
We split our datasets as follows. First, we randomly select 3200 of the 3868 records as the unlabeled set. Then, we manually annotate the remaining 668 records as the labeled set, which contains 1170 sentences. Finally, we randomly divide the labeled set into a train set and a test set with a ratio of 7:3. Statistics of the datasets are listed in Table TABREF33 .
Parameter Settings
To determine suitable parameters, we divide the training set into two parts: the first 80% of the sentences as the training set and the remaining 20% as the validation set.
Character embedding dimensions and K-means clusters are two main parameters in the CRF-based word segmenter.
In this paper, we choose a character-based CRF without any features as the baseline. First, we use Word2Vec to train character embeddings with dimensions of [`50', `100', `150', `200', `300', `400'] respectively, thus obtaining six character embeddings of different dimensions. Second, these six types of character embeddings are used as the input to the K-means algorithm with the number of clusters set to [`50', `100', `200', `300', `400', `500', `600'] respectively, to capture the corresponding features of the character embeddings. Then, we add the K-means clustering features to the baseline for training. As can be seen from Fig. FIGREF36 , when the character embedding dimension INLINEFORM0 = 150 and the number of clusters INLINEFORM1 = 400, the CRF-based word segmenter performs best, so these two parameters are used in subsequent experiments.
Hyper-parameters of neural network have a great impact on the performance. The hyper-parameters we choose are listed in Table TABREF38 .
The dimension of the character embeddings is set to the same value as in the CRF-based word segmenter, and the number of hidden units is set to be the same as well. The maximum word length is related to the number of parameters in the GCNN unit. Since there are many long medical terminologies in EHRs, we set the maximum word length to 6. In addition, dropout is an effective way to prevent neural networks from overfitting BIBREF35 . To avoid overfitting, we apply dropout to the input layer of the scoring model with a rate of 20%.
Experimental Results
Our work experimentally compares two mainstream CWS tools (LTP and Jieba) on training and testing sets. These two tools are widely used and recognized due to their high INLINEFORM0 -score of word segmentation in general fields. However, in specific fields, there are many terminologies and uncommon words, which lead to the unsatisfactory performance of segmentation results. To solve the problem of word segmentation in specific fields, these two tools provide a custom dictionary for users. In the experiments, we also conduct a comparative experiment on whether external domain dictionary has an effect on the experimental results. We manually construct the dictionary when labeling EHRs.
From the results in Table TABREF41 , we find that Jieba benefits a lot from the external dictionary. However, the Recall of LTP decreases when the domain dictionary is added. Generally speaking, since these two tools are trained on general-domain corpora, the results are not good enough to meet the needs of subsequent NLP of EHRs when applied to specific fields.
To investigate the effectiveness of K-means features in the CRF-based segmenter, we also compare K-means with 3 different clustering features, including MeanShift BIBREF36 , SpectralClustering BIBREF37 and DBSCAN BIBREF38 , on the training and testing sets. From the results in Table TABREF43 , adding additional clustering features to the CRF-based segmenter yields a significant improvement in INLINEFORM0 -score, which indicates that clustering features can effectively capture the semantic coherence between characters. Among these clustering features, K-means performs best, so we utilize the K-means results as additional features for the CRF-based segmenter.
In this experiment, since uncertainty sampling is the most popular strategy in real applications for its simplicity and effectiveness BIBREF27 , we compare our proposed strategy with uncertainty sampling in active learning. We conduct our experiments as follows. First, we employ the CRF-based segmenter to annotate the unlabeled set. Then, the sampling strategy in active learning selects a part of the samples for annotators to relabel. Finally, the relabeled samples are added to the train set for the segmenter to re-train. Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences, while uncertainty sampling suggests relabeling samples that are closest to the segmenter's decision boundary.
Generally, the two main parameters in active learning are the number of iterations and the number of samples selected per iteration. To fairly investigate the influence of these two parameters, we compare our proposed strategy with uncertainty sampling under the same parameter settings. We find that once the number of iterations is large enough, increasing it further has a limited impact on the performance of the segmenter. Therefore, we choose 30 as the number of iterations, which is a good trade-off between speed and performance. As for the number of samples selected per iteration, there are 6078 sentences in the unlabeled set; considering the high cost of relabeling, we set four sizes of samples selected per iteration, namely 2%, 5%, 8% and 11%.
The experimental results of two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig. FIGREF45 , where x-axis represents the number of iterations and y-axis denotes the INLINEFORM0 -score of the segmenter. Scoring strategy shows consistent improvements over uncertainty sampling in the early iterations, indicating that scoring strategy is more capable of selecting representative samples.
Furthermore, we also investigate the relations between the best INLINEFORM0 -score and corresponding number of iteration on two sampling strategies, which is depicted in Fig. FIGREF46 .
It is observed that with our proposed scoring model, as the proportion of relabeled data increases, the number of iterations needed to reach the optimal word segmentation result decreases, but the INLINEFORM0 -score of the CRF-based word segmenter also gradually decreases. When the proportion is 2%, the segmenter reaches the highest INLINEFORM1 -score: 90.62%. Obviously, our proposed strategy outperforms uncertainty sampling by a large margin. Our proposed method needs only 2% relabeled samples to obtain an INLINEFORM2 -score of 90.62%, while uncertainty sampling requires 8% of the samples to reach its best INLINEFORM3 -score of 88.98%, which indicates that with our proposed method, we only need to manually relabel a small number of samples to achieve the desired segmentation result.
Conclusion and Future Work
To relieve the effort of EHR annotation, we propose an effective word segmentation method based on active learning, in which the sampling strategy is a scoring model combining information entropy with a neural network. Compared with the mainstream uncertainty sampling, our strategy selects samples from both a statistical perspective and a deep learning perspective. In addition, to capture the coherence between characters, we add K-means clustering features to the CRF-based word segmenter. Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, we evaluate our method on the CWS task. Compared with uncertainty sampling, our method requires 6% fewer relabeled samples to achieve better performance, which proves that our method can save the cost of manual annotation to a certain extent.
In the future, we plan to employ other widely-used deep neural networks, such as convolutional neural networks and attention mechanisms, in the research of EHR segmentation. Moreover, we believe that our method can be applied to other tasks as well, so we will fully investigate its application in other tasks, such as NER and relation extraction.
Acknowledgment
The authors would like to appreciate any suggestions or comments from the anonymous reviewers. This work was supported by the National Natural Science Foundation of China (No. 61772201) and the National Key R&D Program of China for “Precision medical research" (No. 2018YFC0910550). | gated neural network |
352c081c93800df9654315e13a880d6387b91919 | 352c081c93800df9654315e13a880d6387b91919_0 | Q: What are the key points in the role of script knowledge that can be studied?
Text: Motivation
A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.
Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath.
A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts.
This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively. Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference.
The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type.
Collection via Amazon M-Turk
We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010 , VanDerMeer2009, and the OMICS corpus BIBREF1 ), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2 .
Texts were collected via the Amazon Mechanical Turk platform, which provides an opportunity to present an online task to humans (a.k.a. turkers). In order to gauge the effect of different M-Turk instructions on our task, we first conducted pilot experiments with different variants of instructions explaining the task. We finalized the instructions for the full data collection, asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words. The selected instruction variant resulted in comparably simple and explicit scenario-related stories. In the future we plan to collect more complex stories using different instructions. In total 190 turkers participated. All turkers were living in the USA and native speakers of English. We paid USD $0.50 per story to each turker. On average, the turkers took 9.37 minutes per story with a maximum duration of 17.38 minutes.
Data Statistics
Statistics for the corpus are given in Table 2 . On average, each story has a length of 12 sentences and 217 words with 98 word types on average. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, on average each story has 32 verbs, out of which 58% denote events related to the respective scenario. As can be seen in Table 2 , there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from the scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.
For both flying in an airplane and baking a cake, the standard deviation is higher in comparison to other scenarios. This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios. In general, different people tend to describe situations subjectively, with a varying degree of detail. In contrast, texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens. Both planting a tree and taking a bath are simpler activities, which results in generally less complex texts.
The average pairwise word type overlap can be seen as a measure of lexical variety among stories: If it is high, the stories resemble each other more. We can see that stories in the flying in an airplane and baking a cake scenarios have the highest values here, indicating that most turkers used a similar vocabulary in their stories.
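To make this statistic concrete, the sketch below computes an average pairwise word-type overlap between stories using the Jaccard coefficient; the exact overlap measure used for the corpus statistics is not specified in the text, so Jaccard over word-type sets and the toy stories are assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

def avg_pairwise_type_overlap(stories):
    """stories: list of tokenized stories; overlap is computed on word-type sets."""
    type_sets = [set(tokens) for tokens in stories]
    pairs = list(combinations(type_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

stories = [["i", "took", "a", "bath"],
           ["i", "ran", "a", "bath"],
           ["the", "water", "was", "warm"]]
print(round(avg_pairwise_type_overlap(stories), 3))
```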
In general, the response quality was good. We had to discard 9% of the stories as these lacked the quality we were expecting. In total, we selected 910 stories for annotation.
Annotation
This section deals with the annotation of the data. We first describe the final annotation schema. Then, we describe the iterative process of corpus annotation and the refinement of the schema. This refinement was necessary due to the complexity of the annotation.
Annotation Schema
For each of the scenarios, we designed a specific annotation template. A script template consists of scenario-specific event and participant labels. An example of a template is shown in Table 1 . All NP heads in the corpus were annotated with a participant label; all verbs were annotated with an event label. For both participants and events, we also offered the label unclear if the annotator could not assign another label. We additionally annotated coreference chains between NPs. Thus, the process resulted in three layers of annotation: event types, participant types and coreference annotation. These are described in detail below.
As a first layer, we annotated event types. There are two kinds of event type labels, scenario-specific event type labels and general labels. The general labels are used across every scenario and mark general features, for example whether an event belongs to the scenario at all. For the scenario-specific labels, we designed an unique template for every scenario, with a list of script-relevant event types that were used as labels. Such labels include for example ScrEv_close_drain in taking a bath as in Example UID10 (see Figure 1 for a complete list for the taking a bath scenario)
I start by closing $_{\textsc {\scriptsize ScrEv\_close\_drain}}$ the drain at the bottom of the tub.
The general labels that were used in addition to the script-specific labels in every scenario are listed below:
ScrEv_other. An event that belongs to the scenario, but its event type occurs too infrequently (for details, see below, Section "Modification of the Schema" ). We used the label “other" because event classification would become too finegrained otherwise.
Example: After I am dried I put my new clothes on and clean up $_{\textsc {\scriptsize ScrEv\_other}}$ the bathroom.
RelNScrEv. Related non-script event. An event that can plausibly happen during the execution of the script and is related to it, but that is not part of the script.
Example: After finding on what I wanted to wear, I went into the bathroom and shut $_{\textsc {\scriptsize RelNScrEv}}$ the door.
UnrelEv. An event that is unrelated to the script.
Example: I sank into the bubbles and took $_{\textsc {\scriptsize UnrelEv}}$ a deep breath.
Additionally, the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking, as shown in Example UID10 . Today I took a bath $_{\textsc {\scriptsize Evoking}}$ in my new apartment.
As in the case of the event type labels, there are two kinds of participant labels: general labels and scenario-specific labels. The latter are part of the scenario-specific templates, e.g. ScrPart_drain in the taking a bath scenario, as can be seen in Example UID15 .
I start by closing the drain $_{\textsc {\scriptsize ScrPart\_drain}}$ at the bottom of the tub.
The general labels that are used across all scenarios mark noun phrases with scenario-independent features. There are the following general labels:
ScrPart_other. A participant that belongs to the scenario, but its participant type occurs only infrequently.
Example: I find my bath mat $_{\textsc {\scriptsize ScrPart\_other}}$ and lay it on the floor to keep the floor dry.
NPart. Non-participant. A referential NP that does not belong to the scenario.
Example: I washed myself carefully because I did not want to spill water onto the floor $_{\textsc {\scriptsize NPart}}$ .
SuppVComp. A support verb complement. For further discussion of this label, see Section "Special Cases"
Example: I sank into the bubbles and took a deep breath $_{\textsc {\scriptsize SuppVComp}}$ .
Head_of_Partitive. The head of a partitive or a partitive-like construction. For a further discussion of this label cf. Section "Special Cases"
Example: I grabbed a bar $_{\textsc {\scriptsize Head\_of\_Partitive}}$ of soap and lathered my body.
No_label. A non-referential noun phrase that cannot be labeled with another label. Example: I sat for a moment $_{\textsc {\scriptsize No\_label}}$ , relaxing, allowing the warm water to sooth my skin.
All NPs labeled with one of the labels SuppVComp, Head_of_Partitive or No_label are considered to be non-referential. No_label is used mainly in four cases in our data: non-referential time expressions (in a while, a million times better), idioms (no matter what), the non-referential “it” (it felt amazing, it is better) and other abstracta (a lot better, a little bit).
In the first annotation phase, annotators were asked to mark verbs and noun phrases that have an event or participant type, that is not listed in the template, as MissScrEv/ MissScrPart (missing script event or participant, resp.). These annotations were used as a basis for extending the templates (see Section "Modification of the Schema" ) and replaced later by newly introduced labels or ScrEv_other and ScrPart_other respectively.
All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent. The annotation was done by linking heads of NPs (see Example UID21 , where the links are indicated by coindexing). As a rule, we assume that each element of a coreference chain is marked with the same participant type label.
I $ _{\textsc {\scriptsize Coref1}}$ washed my $ _{\textsc {\scriptsize Coref1}}$ entire body $ _{\textsc {\scriptsize Coref2}}$ , starting with my $ _{\textsc {\scriptsize Coref1}}$ face $ _{\textsc {\scriptsize Coref3}} $ and ending with the toes $ _{\textsc {\scriptsize Coref4}} $ . I $ _{\textsc {\scriptsize Coref1}}$ always wash my $ _{\textsc {\scriptsize Coref1}}$ toes $_{\textsc {\scriptsize Coref4}}$ very thoroughly ...
The assignment of an entity to a referent is not always trivial, as is shown in Example UID21 . There are some cases in which two discourse referents are grouped in a plural NP. In the example, those things refers to the group made up of shampoo, soap and sponge. In this case, we asked annotators to introduce a new coreference label, the name of which indicates which referents are grouped together (Coref_group_washing_tools). All NPs are then connected to the group phrase, resulting in an additional coreference chain.
I $ _{\textsc {\scriptsize Coref1}}$ made sure that I $ _{\textsc {\scriptsize Coref1}}$ have my $ _{\textsc {\scriptsize Coref1}}$ shampoo $ _{\textsc {\scriptsize Coref2 + Coref\_group\_washing\_tools}}$ , soap $_{\textsc {\scriptsize Coref3 + Coref\_group\_washing\_tools}}$ and sponge $ _{\textsc {\scriptsize Coref4 + Coref\_group\_washing\_tools}}$ ready to get in. Once I $ _{\textsc {\scriptsize Coref1}}$ have those things $ _{\textsc {\scriptsize Coref\_group\_washing\_tools}}$ I $ _{\textsc {\scriptsize Coref1}}$ sink into the bath. ... I $ _{\textsc {\scriptsize Coref1}}$ applied some soap $ _{\textsc {\scriptsize Coref3}}$ on my $ _{\textsc {\scriptsize Coref1}}$ body and used the sponge $ _{\textsc {\scriptsize Coref4}}$ to scrub a bit. ... I $ _{\textsc {\scriptsize Coref1}}$ rinsed the shampoo $ _{\textsc {\scriptsize Coref2}}$ .

Example UID21 thus contains the following coreference chains:

Coref1: I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I $\rightarrow $ I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I
Coref2: shampoo $\rightarrow $ shampoo
Coref3: soap $\rightarrow $ soap
Coref4: sponge $\rightarrow $ sponge
Coref_group_washing_ tools: shampoo $\rightarrow $ soap $\rightarrow $ sponge $\rightarrow $ things
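A minimal sketch of how the chains from Example UID21 could be represented programmatically is given below; the dictionary layout and mention lists are illustrative assumptions, not the actual corpus format.

```python
# chains map a coreference label to the ordered list of mention heads
chains = {
    "Coref1": ["I", "I", "my", "I", "I", "I", "my", "I"],
    "Coref2": ["shampoo", "shampoo"],
    "Coref3": ["soap", "soap"],
    "Coref4": ["sponge", "sponge"],
    # the plural NP "those things" groups three referents into one extra chain
    "Coref_group_washing_tools": ["shampoo", "soap", "sponge", "things"],
}
print(len(chains["Coref1"]))  # number of mentions of the protagonist referent
```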
Development of the Schema
The templates were carefully designed in an iterated process. For each scenario, one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories. For a subset of the scenarios, preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point. Subsequently, the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template. Necessary extensions and changes in the templates were discussed and agreed upon. Most of the cases of disagreement were related to the granularity of the event and participant types. We agreed on the script-specific functional equivalence as a guiding principle. For example, reading a book, listening to music and having a conversation are subsumed under the same event label in the flight scenario, because they have the common function of in-flight entertainment in the scenario. In contrast, we assumed different labels for the cake tin and other utensils (bowls etc.), since they have different functions in the baking a cake scenario and accordingly occur with different script events.
Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary. Instead, scripts describe cognitively prominent events in an activity. An example can be found in the flight scenario. While more than a third of the turkers mentioned the event of fastening the seat belts in the plane (buckle_seat_belt), no person wrote about undoing their seat belts again, although in reality both events appear equally often. Consequently, we added an event type label for buckling up, but no label for undoing the seat belts.
First Annotation Phase
We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section "Inter-Annotator Agreement" ). It was found to be sufficiently high.
Annotation of the corpus together with some pre- and post-processing of the data required about 500 hours of work. All stories were annotated with event and participant types (a total of 12,188 and 43,946 instances, respectively). On average there were 7 coreference chains per story with an average length of 6 tokens.
Modification of the Schema
After the first annotation round, we extended and changed the templates based on the results. As mentioned before, we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates. Based on the instances with these labels (a total of 941 and 1717 instances, respectively), we extended the guidelines to cover the sufficiently frequent cases. In order to include new labels for event and participant types, we tried to estimate the number of instances that would fall under a certain label. We added new labels according to the following conditions:
For the participant annotations, we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories (i.e. in approximately 5% of the stories).
For the event annotations, we chose those new labels for event types that would appear in at least 5 different stories.
In order to avoid too fine a granularity of the templates, all other instances of MissScrEv and MissScrPart were re-labeled with ScrEv_other and ScrPart_other. We also relabeled participants and events from the first annotation phase with ScrEv_other and ScrPart_other, if they did not meet the frequency requirements. The event label air_bathroom (the event of letting fresh air into the room after the bath), for example, was only used once in the stories, so we relabeled that instance to ScrEv_other.
Additionally, we looked at the DeScript corpus BIBREF3 , which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript (see Section "Comparison to the DeScript Corpus" ). Every such set contains event descriptions that describe a certain event type. We extended our templates with additional labels for these events, if they were not yet part of the template.
Special Cases
Noun-noun compounds were annotated twice with the same label (whole span plus the head noun), as indicated by Example UID31 . This redundant double annotation is motivated by potential processing requirements.
I get my (wash (cloth $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ )) $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ and put it under the water.
A special treatment was given to support verb constructions such as take time, get home or take a seat in Example UID32 . The semantics of the verb itself is highly underspecified in such constructions; the event type is largely dependent on the object NP. As shown in Example UID32 , we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp (support verb complement), indicating that it does not have a proper reference.
I step into the tub and take $ _{\textsc {\scriptsize ScrEv\_sink\_water}} $ a seat $ _{\textsc {\scriptsize SuppVComp}} $ .
We used the Head_of_Partitive label for the heads in partitive constructions, assuming that the only referential part of the construction is the complement. This is not completely correct, since different partitive heads vary in their degree of concreteness (cf. Examples UID33 and UID33 ), but we did not see a way to make the distinction sufficiently transparent to the annotators.

Our seats were at the back $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of the train $ _{\textsc {\scriptsize ScrPart\_train}} $ .

In the library you can always find a couple $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of interesting books $ _{\textsc {\scriptsize ScrPart\_book}} $ .
Group-denoting NPs sometimes refer to groups whose members are instances of different participant types. In Example UID34 , the first-person plural pronoun refers to the group consisting of the passenger (I) and a non-participant (my friend). To avoid a proliferation of participant type labels, we labeled these cases with Unclear.
I $ _{\textsc {\scriptsize {ScrPart\_passenger}}}$ wanted to visit my $_{\textsc {\scriptsize {ScrPart\_passenger}}}$ friend $ _{\textsc {\scriptsize {NPart}}}$ in New York. ... We $_{\textsc {\scriptsize Unclear}}$ met at the train station.
We made an exception for the Getting a Haircut scenario, where the mixed participant group consisting of the hairdresser and the customer occurs very often, as in Example UID34 . Here, we introduced the additional ad-hoc participant label Scr_Part_hairdresser_customer.
While Susan $_{\textsc {\scriptsize {ScrPart\_hairdresser}}}$ is cutting my $_{\textsc {\scriptsize {ScrPart\_customer}}}$ hair we $_{\textsc {\scriptsize Scr\_Part\_hairdresser\_customer}}$ usually talk a bit.
Inter-Annotator Agreement
In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.
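For reference, a minimal NumPy sketch of Fleiss' Kappa over a ratings matrix; the toy matrix below is purely illustrative and not the actual evaluation data.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) array; counts[i, j] is the number of
    annotators who assigned item i to category j. Assumes every item was
    rated by the same number of annotators."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]

    # Per-item agreement P_i and its mean P_bar
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement P_e from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Toy example: 4 annotators labeling 3 instances with 3 possible labels.
toy = [[4, 0, 0],
       [2, 2, 0],
       [1, 1, 2]]
print(round(fleiss_kappa(toy), 3))
```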
For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) relative to the set of pairs annotated by at least one annotator (see Figure 4 ). We consider the resulting agreement of 90.5% between annotators to be good.
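A small sketch of this qualified-majority measure, assuming each annotator's decisions are available as a set of coreferent mention-head pairs; the data layout is an assumption made for illustration.

```python
from collections import Counter

def qualified_majority_agreement(annotator_pairs, quorum=3):
    """annotator_pairs: list of sets, one per annotator, each containing
    coreferent (head, head) pairs. Returns the share of pairs marked by at
    least `quorum` annotators among all pairs marked by at least one."""
    votes = Counter(pair for pairs in annotator_pairs for pair in set(pairs))
    annotated_by_any = len(votes)
    annotated_by_quorum = sum(1 for c in votes.values() if c >= quorum)
    return annotated_by_quorum / annotated_by_any if annotated_by_any else 0.0
```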
Annotated Corpus Statistics
Figure 5 gives an overview of the number of event and participant types provided in the templates. Taking a flight and getting a haircut stand out with a large number of both event and participant types, which is due to the inherent complexity of the scenarios. In contrast, planting a tree and going on a train contain the fewest labels. There are 19 event and participant types on average.
Figure 6 presents overview statistics about the usage of event labels, participant labels and coreference chain annotations. As can be seen, there are usually many more mentions of participants than of events. Some coreference chains are very long (which also results in a large scenario-wise standard deviation); usually, these long chains describe the protagonist.
The flying in an airplane scenario again stands out in terms of participant mentions, event mentions and the average number of coreference chains.
Figure 7 shows, for every participant label in the baking a cake scenario, the number of stories in which it occurs. This indicates how relevant a participant is for the script. As can be seen, a small number of participants are highly prominent: cook, ingredients and cake are mentioned in every story. The fact that the protagonist appears most often holds consistently for all other scenarios: the acting person appears in every story and is mentioned most frequently.
Figure 8 shows how participant and event type labels are distributed over all annotated instances, averaged over all scenarios. The groups stand for the most frequently appearing label, the labels ranked 2 to 5 in terms of frequency, and those ranked 6 to 10. ScrEv_other and ScrPart_other are shown separately. As can be seen, the most frequently used participant label (the protagonist) makes up about 40% of all participant instances. The four labels that follow the protagonist in terms of frequency together account for 37% of the cases. More than 2 out of 3 participant instances thus belong to one of only 5 labels.
In contrast, the distribution for events is more balanced: 14% of all event instances have the most prominent event type. ScrEv_other and ScrPart_other each appear as labels in at most 5% of all event and participant instantiations, so the specific event and participant type labels in our templates cover the vast majority of the instances.
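A minimal sketch of how such coverage shares can be computed from a flat list of instance labels; the grouping mirrors Figure 8, and the input is assumed to be one label string per annotated instance.

```python
from collections import Counter

def coverage_shares(labels):
    """labels: list of label strings, one per annotated instance."""
    counts = [c for _, c in Counter(labels).most_common()]
    total = sum(counts)
    return {
        "top_1": sum(counts[:1]) / total,
        "top_2_5": sum(counts[1:5]) / total,
        "top_6_10": sum(counts[5:10]) / total,
    }
```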
In Figure 9 , we grouped participants similarly into the first, the top 2-5 and top 6-10 most frequently appearing participant types. The figure shows for each of these groups the average frequency per story, and in the rightmost column the overall average. The results correspond to the findings from the last paragraph.
Comparison to the DeScript Corpus
As mentioned previously, the InScript corpus is part of a larger research project in which a corpus of a different kind, the DeScript corpus, was also created. DeScript covers 40 scenarios, including the 10 scenarios from InScript. This corpus contains texts that describe scripts on an abstract and generic level, while InScript contains instantiations of scripts in narrative texts. Script events in DeScript are described in a very simple, telegram-style language (see Figure 2 ). Since one of the long-term goals of the project is to align the InScript texts with the script structure given by DeScript, it is interesting to compare both resources.
The InScript corpus exhibits much more lexical variation than DeScript. Many approaches use the type-token ratio to measure this variance. However, this measure is known to be sensitive to text length (see e.g. Tweedie1998), which would result in very small values for InScript and relatively large ones for DeScript, given the large average difference of text lengths between the corpora. Instead, we decided to use the Measure of Textual Lexical Diversity (MTLD) (McCarthy2010, McCarthy2005), which is familiar in corpus linguistics. This metric measures the average number of tokens in a text that are needed to retain a type-token ratio above a certain threshold. If the MTLD for a text is high, many tokens are needed to lower the type-token ratio under the threshold, so the text is lexically diverse. In contrast, a low MTLD indicates that only a few words are needed to make the type-token ratio drop, so the lexical diversity is smaller. We use the threshold of 0.71, which is proposed by the authors as a well-proven value.
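A simplified, one-directional sketch of MTLD with the 0.71 threshold; the canonical measure averages a forward and a backward pass, and this illustration is not necessarily the exact implementation behind the reported figures.

```python
def mtld_one_pass(tokens, threshold=0.71):
    """Average number of tokens needed before the running type-token ratio
    of a segment drops to the threshold."""
    factors = 0.0
    seen = set()
    segment_len = 0
    for tok in tokens:
        seen.add(tok.lower())
        segment_len += 1
        if len(seen) / segment_len <= threshold:
            factors += 1.0
            seen.clear()
            segment_len = 0
    if segment_len > 0:  # partial factor for the leftover segment
        ttr = len(seen) / segment_len
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))

def mtld(tokens, threshold=0.71):
    # Mean of the forward and backward passes, as in McCarthy & Jarvis.
    return 0.5 * (mtld_one_pass(tokens, threshold) +
                  mtld_one_pass(list(reversed(tokens)), threshold))
```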
Figure 10 compares the lexical diversity of both resources. As can be seen, the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions, across all scenarios. For both resources, the flying in an airplane scenario is most diverse (as was also indicated above by the mean word type overlap). However, the variation in lexical diversity across scenarios is larger for DeScript than for InScript. Thus, the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts. We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events. We excluded events with fewer than 10 occurrences in DeScript or InScript. Since there is only an event annotation for 50 ESDs per scenario in DeScript, we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable.
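A small sketch of the entropy computation over verb lemmas per event type; the (event_type, verb_lemma) input layout is an assumption made for illustration.

```python
import math
from collections import Counter, defaultdict

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def event_entropies(instances, min_count=10):
    """instances: iterable of (event_type, verb_lemma) pairs. Returns the
    entropy of the lemma distribution for each sufficiently frequent event type."""
    lemmas = defaultdict(Counter)
    for event_type, lemma in instances:
        lemmas[event_type][lemma] += 1
    return {ev: entropy(list(cnt.values()))
            for ev, cnt in lemmas.items() if sum(cnt.values()) >= min_count}
```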
Figure 11 shows as an example the entropy values for the event types in the going on a train scenario. As can be seen in the graph, the entropy for InScript is in general higher than for DeScript. In the stories, a wider variety of verbs is used to describe events. There are also large differences between events: While wait has a very low entropy, spend_time_train has an extremely high entropy value. This event type covers many different activities such as reading, sleeping, etc.
Conclusion
In this paper we described the InScript corpus of 1,000 narrative texts annotated with script structure and coreference information. We described the annotation process, various difficulties encountered during annotation and different remedies that were taken to overcome these. One of the future research goals of our project is also concerned with finding automatic methods for text-to-script mapping, i.e. for the alignment of text segments with script states. We consider InScript and DeScript together as a resource for studying this alignment. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.
Acknowledgements
This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding'.
Q: Did the annotators agree, and how much?
Motivation
A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.
Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath.
A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts.
This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively. Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference.
The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type.
Collection via Amazon M-Turk
We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010 , VanDerMeer2009, and the OMICS corpus BIBREF1 ), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2 .
Texts were collected via the Amazon Mechanical Turk platform, which provides an opportunity to present an online task to humans (a.k.a. turkers). In order to gauge the effect of different M-Turk instructions on our task, we first conducted pilot experiments with different variants of instructions explaining the task. We finalized the instructions for the full data collection, asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words. The selected instruction variant resulted in comparably simple and explicit scenario-related stories. In the future we plan to collect more complex stories using different instructions. In total 190 turkers participated. All turkers were living in the USA and native speakers of English. We paid USD $0.50 per story to each turker. On average, the turkers took 9.37 minutes per story with a maximum duration of 17.38 minutes.
Data Statistics
Statistics for the corpus are given in Table 2 . On average, each story has a length of 12 sentences and 217 words, with 98 word types. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, on average each story has 32 verbs, out of which 58% denote events related to the respective scenario. As can be seen in Table 2 , there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from this scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.
For both flying in an airplane and baking a cake, the standard deviation is higher in comparison to other scenarios. This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios. In general, different people tend to describe situations subjectively, with a varying degree of detail. In contrast, texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens. Both planting a tree and taking a bath are simpler activities, which results in generally less complex texts.
The average pairwise word type overlap can be seen as a measure of lexical variety among stories: If it is high, the stories resemble each other more. We can see that stories in the flying in an airplane and baking a cake scenarios have the highest values here, indicating that most turkers used a similar vocabulary in their stories.
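A minimal sketch of such a pairwise word-type overlap, here computed as the average Jaccard overlap of word-type sets over all story pairs; the exact overlap definition behind Table 2 is not spelled out here, so this is an illustrative variant.

```python
from itertools import combinations

def avg_pairwise_type_overlap(stories):
    """stories: list of token lists, one per story."""
    type_sets = [set(tok.lower() for tok in tokens) for tokens in stories]
    overlaps = [len(a & b) / len(a | b) for a, b in combinations(type_sets, 2)]
    return sum(overlaps) / len(overlaps) if overlaps else 0.0
```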
In general, the response quality was good. We had to discard 9% of the stories as these lacked the quality we were expecting. In total, we selected 910 stories for annotation.
Annotation
This section deals with the annotation of the data. We first describe the final annotation schema. Then, we describe the iterative process of corpus annotation and the refinement of the schema. This refinement was necessary due to the complexity of the annotation.
Annotation Schema
For each of the scenarios, we designed a specific annotation template. A script template consists of scenario-specific event and participant labels. An example of a template is shown in Table 1 . All NP heads in the corpus were annotated with a participant label; all verbs were annotated with an event label. For both participants and events, we also offered the label unclear if the annotator could not assign another label. We additionally annotated coreference chains between NPs. Thus, the process resulted in three layers of annotation: event types, participant types and coreference annotation. These are described in detail below.
As a first layer, we annotated event types. There are two kinds of event type labels, scenario-specific event type labels and general labels. The general labels are used across every scenario and mark general features, for example whether an event belongs to the scenario at all. For the scenario-specific labels, we designed a unique template for every scenario, with a list of script-relevant event types that were used as labels. Such labels include for example ScrEv_close_drain in taking a bath as in Example UID10 (see Figure 1 for a complete list for the taking a bath scenario)
I start by closing $_{\textsc {\scriptsize ScrEv\_close\_drain}}$ the drain at the bottom of the tub.
The general labels that were used in addition to the script-specific labels in every scenario are listed below:
ScrEv_other. An event that belongs to the scenario, but whose event type occurs too infrequently (for details, see below, Section "Modification of the Schema" ). We used the label "other" because event classification would become too fine-grained otherwise.
Example: After I am dried I put my new clothes on and clean up $_{\textsc {\scriptsize ScrEv\_other}}$ the bathroom.
RelNScrEv. Related non-script event. An event that can plausibly happen during the execution of the script and is related to it, but that is not part of the script.
Example: After finding on what I wanted to wear, I went into the bathroom and shut $_{\textsc {\scriptsize RelNScrEv}}$ the door.
UnrelEv. An event that is unrelated to the script.
Example: I sank into the bubbles and took $_{\textsc {\scriptsize UnrelEv}}$ a deep breath.
Additionally, the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking, as shown in Example UID10 .

Today I took a bath $_{\textsc {\scriptsize Evoking}}$ in my new apartment.
As in the case of the event type labels, there are two kinds of participant labels: general labels and scenario-specific labels. The latter are part of the scenario-specific templates, e.g. ScrPart_drain in the taking a bath scenario, as can be seen in Example UID15 .
I start by closing the drain $_{\textsc {\scriptsize ScrPart\_drain}}$ at the bottom of the tub.
The general labels that are used across all scenarios mark noun phrases with scenario-independent features. There are the following general labels:
ScrPart_other. A participant that belongs to the scenario, but its participant type occurs only infrequently.
Example: I find my bath mat $_{\textsc {\scriptsize ScrPart\_other}}$ and lay it on the floor to keep the floor dry.
NPart. Non-participant. A referential NP that does not belong to the scenario.
Example: I washed myself carefully because I did not want to spill water onto the floor $_{\textsc {\scriptsize NPart}}$ .
SuppVComp. A support verb complement. For further discussion of this label, see Section "Special Cases"
Example: I sank into the bubbles and took a deep breath $_{\textsc {\scriptsize SuppVComp}}$ .
Head_of_Partitive. The head of a partitive or a partitive-like construction. For a further discussion of this label cf. Section "Special Cases"
Example: I grabbed a bar $_{\textsc {\scriptsize Head\_of\_Partitive}}$ of soap and lathered my body.
No_label. A non-referential noun phrase that cannot be labeled with another label.

Example: I sat for a moment $_{\textsc {\scriptsize No\_label}}$ , relaxing, allowing the warm water to soothe my skin.
All NPs labeled with one of the labels SuppVComp, Head_of_Partitive or No_label are considered to be non-referential. No_label is used mainly in four cases in our data: non-referential time expressions (in a while, a million times better), idioms (no matter what), the non-referential “it” (it felt amazing, it is better) and other abstracta (a lot better, a little bit).
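To make the notion of a scenario-specific template more concrete, the sketch below shows one possible representation of such a template as a plain data structure; the few script-specific labels listed are taken from the examples in this paper and do not form the complete taking a bath template.

```python
# A scenario template pairs scenario-specific event and participant labels
# with the general labels that are shared across all scenarios.
TAKING_A_BATH_TEMPLATE = {
    "script_events": ["ScrEv_close_drain", "ScrEv_sink_water"],          # excerpt only
    "script_participants": ["ScrPart_drain", "ScrPart_washing_tools"],   # excerpt only
    "general_event_labels": ["ScrEv_other", "RelNScrEv", "UnrelEv", "Evoking"],
    "general_participant_labels": [
        "ScrPart_other", "NPart", "SuppVComp", "Head_of_Partitive", "No_label",
    ],
}
```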
In the first annotation phase, annotators were asked to mark verbs and noun phrases that have an event or participant type that is not listed in the template as MissScrEv / MissScrPart (missing script event or participant, respectively). These annotations were used as a basis for extending the templates (see Section "Modification of the Schema" ) and were later replaced by newly introduced labels or by ScrEv_other and ScrPart_other, respectively.
All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent. The annotation was done by linking heads of NPs (see Example UID21 , where the links are indicated by coindexing). As a rule, we assume that each element of a coreference chain is marked with the same participant type label.
I $ _{\textsc {\scriptsize Coref1}}$ washed my $ _{\textsc {\scriptsize Coref1}}$ entire body $ _{\textsc {\scriptsize Coref2}}$ , starting with my $ _{\textsc {\scriptsize Coref1}}$ face $ _{\textsc {\scriptsize Coref3}} $ and ending with the toes $ _{\textsc {\scriptsize Coref4}} $ . I $ _{\textsc {\scriptsize Coref1}}$ always wash my $ _{\textsc {\scriptsize Coref1}}$ toes $_{\textsc {\scriptsize Coref4}}$ very thoroughly ...
The assignment of an entity to a referent is not always trivial, as is shown in Example UID21 . There are some cases in which two discourse referents are grouped in a plural NP. In the example, those things refers to the group made up of shampoo, soap and sponge. In this case, we asked annotators to introduce a new coreference label, the name of which indicates which referents are grouped together (Coref_group_washing_tools). All NPs are then connected to the group phrase, resulting in an additional coreference chain.
I $_{\textsc {\scriptsize Coref1}}$ made sure that I $_{\textsc {\scriptsize Coref1}}$ have my $_{\textsc {\scriptsize Coref1}}$ shampoo $_{\textsc {\scriptsize Coref2 + Coref\_group\_washing\_tools}}$ , soap $_{\textsc {\scriptsize Coref3 + Coref\_group\_washing\_tools}}$ and sponge $_{\textsc {\scriptsize Coref4 + Coref\_group\_washing\_tools}}$ ready to get in. Once I $_{\textsc {\scriptsize Coref1}}$ have those things $_{\textsc {\scriptsize Coref\_group\_washing\_tools}}$ I $_{\textsc {\scriptsize Coref1}}$ sink into the bath. ... I $_{\textsc {\scriptsize Coref1}}$ applied some soap $_{\textsc {\scriptsize Coref3}}$ on my $_{\textsc {\scriptsize Coref1}}$ body and used the sponge $_{\textsc {\scriptsize Coref4}}$ to scrub a bit. ... I $_{\textsc {\scriptsize Coref1}}$ rinsed the shampoo $_{\textsc {\scriptsize Coref2}}$ .

Example UID21 thus contains the following coreference chains:

Coref1: I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I $\rightarrow $ I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I
Coref2: shampoo $\rightarrow $ shampoo
Coref3: soap $\rightarrow $ soap
Coref4: sponge $\rightarrow $ sponge
Coref_group_washing_ tools: shampoo $\rightarrow $ soap $\rightarrow $ sponge $\rightarrow $ things
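A small sketch of how these chains, including the additional group chain, could be represented programmatically; the mention layout is an illustrative assumption, not the corpus file format.

```python
# Each chain id maps to the ordered list of mention heads; a mention may
# belong both to its own chain and to a group chain.
chains = {
    "Coref1": ["I", "I", "my", "I", "I", "I", "my", "I"],
    "Coref2": ["shampoo", "shampoo"],
    "Coref3": ["soap", "soap"],
    "Coref4": ["sponge", "sponge"],
    "Coref_group_washing_tools": ["shampoo", "soap", "sponge", "things"],
}

def chain_lengths(chains):
    return {cid: len(mentions) for cid, mentions in chains.items()}
```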
Development of the Schema
The templates were carefully designed in an iterated process. For each scenario, one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories. For a subset of the scenarios, preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point. Subsequently, the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template. Necessary extensions and changes in the templates were discussed and agreed upon. Most of the cases of disagreement were related to the granularity of the event and participant types. We agreed on the script-specific functional equivalence as a guiding principle. For example, reading a book, listening to music and having a conversation are subsumed under the same event label in the flight scenario, because they have the common function of in-flight entertainment in the scenario. In contrast, we assumed different labels for the cake tin and other utensils (bowls etc.), since they have different functions in the baking a cake scenario and accordingly occur with different script events.
Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary. Instead, scripts describe cognitively prominent events in an activity. An example can be found in the flight scenario. While more than a third of the turkers mentioned the event of fastening the seat belts in the plane (buckle_seat_belt), no person wrote about undoing their seat belts again, although in reality both events appear equally often. Consequently, we added an event type label for buckling up, but no label for undoing the seat belts.
First Annotation Phase
We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section "Inter-Annotator Agreement" ). It was found to be sufficiently high.
Annotation of the corpus together with some pre- and post-processing of the data required about 500 hours of work. All stories were annotated with event and participant types (a total of 12,188 and 43,946 instances, respectively). On average there were 7 coreference chains per story with an average length of 6 tokens.
Modification of the Schema
After the first annotation round, we extended and changed the templates based on the results. As mentioned before, we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates. Based on the instances with these labels (a total of 941 and 1717 instances, respectively), we extended the guidelines to cover the sufficiently frequent cases. In order to include new labels for event and participant types, we tried to estimate the number of instances that would fall under a certain label. We added new labels according to the following conditions:
For the participant annotations, we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories (i.e. in approximately 5% of the stories).
For the event annotations, we chose those new labels for event types that would appear in at least 5 different stories.
In order to avoid too fine a granularity of the templates, all other instances of MissScrEv and MissScrPart were re-labeled with ScrEv_other and ScrPart_other. We also relabeled participants and events from the first annotation phase with ScrEv_other and ScrPart_other, if they did not meet the frequency requirements. The event label air_bathroom (the event of letting fresh air into the room after the bath), for example, was only used once in the stories, so we relabeled that instance to ScrEv_other.
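A minimal sketch of this frequency-based filtering, assuming the MissScrEv/MissScrPart annotations are exported as (story id, proposed label) pairs; the function and variable names are illustrative, not taken from our processing scripts:

```python
from collections import defaultdict

def select_new_labels(instances, min_total=10, min_stories=5):
    """Keep proposed labels that are frequent enough to become template labels.

    `instances` is an iterable of (story_id, proposed_label) pairs collected
    from the Miss* annotations; labels below the thresholds fall back to
    ScrEv_other / ScrPart_other.
    """
    totals = defaultdict(int)
    stories = defaultdict(set)
    for story_id, label in instances:
        totals[label] += 1
        stories[label].add(story_id)
    return {label for label in totals
            if totals[label] >= min_total and len(stories[label]) >= min_stories}

# Participant labels use both thresholds (10 instances, 5 stories); for event
# labels only the story-coverage condition applies, e.g.
# select_new_labels(event_instances, min_total=0, min_stories=5).
```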
Additionally, we looked at the DeScript corpus BIBREF3 , which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript (see Section "Comparison to the DeScript Corpus" ). Every such set contains event descriptions that describe a certain event type. We extended our templates with additional labels for these events, if they were not yet part of the template.
Special Cases
Noun-noun compounds were annotated twice with the same label (whole span plus the head noun), as indicated by Example UID31 . This redundant double annotation is motivated by potential processing requirements.
I get my (wash (cloth) $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ ) $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ and put it under the water.
A special treatment was given to support verb constructions such as take time, get home or take a seat in Example UID32 . The semantics of the verb itself is highly underspecified in such constructions; the event type is largely dependent on the object NP. As shown in Example UID32 , we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp (support verb complement), indicating that it does not have a proper reference.
I step into the tub and take $ _{\textsc {\scriptsize ScrEv\_sink\_water}} $ a seat $ _{\textsc {\scriptsize SuppVComp}} $ .
We used the Head_of_Partitive label for the heads in partitive constructions, assuming that the only referential part of the construction is the complement. This is not completely correct, since different partitive heads vary in their degree of concreteness (cf. Examples UID33 and UID33 ), but we did not see a way to make the distinction sufficiently transparent to the annotators. Our seats were at the back $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of the train $ _{\textsc {\scriptsize ScrPart\_train}} $ . In the library you can always find a couple $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of interesting books $ _{\textsc {\scriptsize ScrPart\_book}} $ .
Group-denoting NPs sometimes refer to groups whose members are instances of different participant types. In Example UID34 , the first-person plural pronoun refers to the group consisting of the passenger (I) and a non-participant (my friend). To avoid a proliferation of participant type labels, we labeled these cases with Unclear.
I $ _{\textsc {\scriptsize {ScrPart\_passenger}}}$ wanted to visit my $_{\textsc {\scriptsize {ScrPart\_passenger}}}$ friend $ _{\textsc {\scriptsize {NPart}}}$ in New York. ... We $_{\textsc {\scriptsize Unclear}}$ met at the train station.
We made an exception for the Getting a Haircut scenario, where the mixed participant group consisting of the hairdresser and the customer occurs very often, as in Example UID34 . Here, we introduced the additional ad-hoc participant label Scr_Part_hairdresser_customer.
While Susan $_{\textsc {\scriptsize {ScrPart\_hairdresser}}}$ is cutting my $_{\textsc {\scriptsize {ScrPart\_customer}}}$ hair we $_{\textsc {\scriptsize Scr\_Part\_hairdresser\_customer}}$ usually talk a bit.
Inter-Annotator Agreement
In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.
For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement.
Annotated Corpus Statistics
Figure 5 gives an overview of the number of event and participant types provided in the templates. Taking a flight and getting a haircut stand out with a large number of both event and participant types, which is due to the inherent complexity of the scenarios. In contrast, planting a tree and going on a train contain the fewest labels. There are 19 event and participant types on average.
Figure 6 presents overview statistics about the usage of event labels, participant labels and coreference chain annotations. As can be seen, there are usually many more mentions of participants than events. For coreference chains, there are some chains that are really long (which also results in a large scenario-wise standard deviation). Usually, these chains describe the protagonist.
We also found again that the flying in an airplane scenario stands out in terms of participant mentions, event mentions and average number of coreference chains.
Figure 7 shows for every participant label in the baking a cake scenario the number of stories which they occurred in. This indicates how relevant a participant is for the script. As can be seen, a small number of participants are highly prominent: cook, ingredients and cake are mentioned in every story. The fact that the protagonist appears most often consistently holds for all other scenarios, where the acting person appears in every story, and is mentioned most frequently.
Figure 8 shows the distribution of participant/event type labels over all appearances over all scenarios on average. The groups stand for the most frequently appearing label, the top 2 to 5 labels in terms of frequency and the top 6 to 10. ScrEv_other and ScrPart_other are shown separately. As can be seen, the most frequently used participant label (the protagonist) makes up about 40% of overall participant instances. The four labels that follow the protagonist in terms of frequency together appear in 37% of the cases. More than 2 out of 3 participants in total belong to one of only 5 labels.
In contrast, the distribution for events is more balanced. 14% of all event instances have the most prominent event type. ScrEv_other and ScrPart_other both appear as labels in at most 5% of all event and participant instantiations: The specific event and participant type labels in our templates cover by far most of the instances.
In Figure 9 , we grouped participants similarly into the first, the top 2-5 and top 6-10 most frequently appearing participant types. The figure shows for each of these groups the average frequency per story, and in the rightmost column the overall average. The results correspond to the findings from the last paragraph.
Comparison to the DeScript Corpus
As mentioned previously, the InScript corpus is part of a larger research project, in which also a corpus of a different kind, the DeScript corpus, was created. DeScript covers 40 scenarios, and also contains the 10 scenarios from InScript. This corpus contains texts that describe scripts on an abstract and generic level, while InScript contains instantiations of scripts in narrative texts. Script events in DeScript are described in a very simple, telegram-style language (see Figure 2 ). Since one of the long-term goals of the project is to align the InScript texts with the script structure given from DeScript, it is interesting to compare both resources.
The InScript corpus exhibits much more lexical variation than DeScript. Many approaches use the type-token ratio to measure this variance. However, this measure is known to be sensitive to text length (see e.g. Tweedie1998), which would result in very small values for InScript and relatively large ones for DeScript, given the large average difference of text lengths between the corpora. Instead, we decided to use the Measure of Textual Lexical Diversity (MTLD) (McCarthy2010, McCarthy2005), which is familiar in corpus linguistics. This metric measures the average number of tokens in a text that are needed to retain a type-token ratio above a certain threshold. If the MTLD for a text is high, many tokens are needed to lower the type-token ratio under the threshold, so the text is lexically diverse. In contrast, a low MTLD indicates that only a few words are needed to make the type-token ratio drop, so the lexical diversity is smaller. We use the threshold of 0.71, which is proposed by the authors as a well-proven value.
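For reference, a simplified, one-directional implementation of MTLD (the full measure averages a forward and a backward pass over the token sequence); this is a sketch of the idea, not the exact tool used for the reported numbers:

```python
def mtld_one_direction(tokens, threshold=0.71):
    """Simplified one-directional MTLD: mean number of tokens needed to keep
    the running type-token ratio above `threshold`."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= threshold:
            factors += 1.0
            types, count = set(), 0
    if count > 0:  # partial factor for the remaining segment
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

def mtld(tokens, threshold=0.71):
    # Average the forward and backward passes, as in the original measure.
    return 0.5 * (mtld_one_direction(tokens, threshold)
                  + mtld_one_direction(list(reversed(tokens)), threshold))
```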
Figure 10 compares the lexical diversity of both resources. As can be seen, the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions, across all scenarios. For both resources, the flying in an airplane scenario is most diverse (as was also indicated above by the mean word type overlap). However, the difference in the variation of lexical variance of scenarios is larger for DeScript than for InScript. Thus, the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts. We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events. We excluded events for which there were less than 10 occurrences in DeScript or InScript. Since there is only an event annotation for 50 ESDs per scenario in DeScript, we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable.
Figure 11 shows as an example the entropy values for the event types in the going on a train scenario. As can be seen in the graph, the entropy for InScript is in general higher than for DeScript. In the stories, a wider variety of verbs is used to describe events. There are also large differences between events: While wait has a really low entropy, spend_time_train has an extremely high entropy value. This event type covers many different activities such as reading, sleeping etc.
Conclusion
In this paper we described the InScript corpus of 1,000 narrative texts annotated with script structure and coreference information. We described the annotation process, various difficulties encountered during annotation and different remedies that were taken to overcome these. One of the future research goals of our project is also concerned with finding automatic methods for text-to-script mapping, i.e. for the alignment of text segments with script states. We consider InScript and DeScript together as a resource for studying this alignment. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.
Acknowledgements
This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding'. | Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information. |
a37ef83ab6bcc6faff3c70a481f26174ccd40489 | a37ef83ab6bcc6faff3c70a481f26174ccd40489_0 | Q: How many subjects have been used to create the annotations?
Text: Motivation
A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.
Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath.
A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts.
This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively. Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference.
The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type.
Collection via Amazon M-Turk
We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010 , VanDerMeer2009, and the OMICS corpus BIBREF1 ), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2 .
Texts were collected via the Amazon Mechanical Turk platform, which provides an opportunity to present an online task to humans (a.k.a. turkers). In order to gauge the effect of different M-Turk instructions on our task, we first conducted pilot experiments with different variants of instructions explaining the task. We finalized the instructions for the full data collection, asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words. The selected instruction variant resulted in comparably simple and explicit scenario-related stories. In the future we plan to collect more complex stories using different instructions. In total 190 turkers participated. All turkers were living in the USA and native speakers of English. We paid USD $0.50 per story to each turker. On average, the turkers took 9.37 minutes per story with a maximum duration of 17.38 minutes.
Data Statistics
Statistics for the corpus are given in Table 2 . On average, each story has a length of 12 sentences and 217 words with 98 word types on average. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, on average each story has 32 verbs, out of which 58% denote events related to the respective scenario. As can be seen in Table 2 , there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from the scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.
For both flying in an airplane and baking a cake, the standard deviation is higher in comparison to other scenarios. This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios. In general, different people tend to describe situations subjectively, with a varying degree of detail. In contrast, texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens. Both planting a tree and taking a bath are simpler activities, which results in generally less complex texts.
The average pairwise word type overlap can be seen as a measure of lexical variety among stories: If it is high, the stories resemble each other more. We can see that stories in the flying in an airplane and baking a cake scenarios have the highest values here, indicating that most turkers used a similar vocabulary in their stories.
In general, the response quality was good. We had to discard 9% of the stories as these lacked the quality we were expecting. In total, we selected 910 stories for annotation.
Annotation
This section deals with the annotation of the data. We first describe the final annotation schema. Then, we describe the iterative process of corpus annotation and the refinement of the schema. This refinement was necessary due to the complexity of the annotation.
Annotation Schema
For each of the scenarios, we designed a specific annotation template. A script template consists of scenario-specific event and participant labels. An example of a template is shown in Table 1 . All NP heads in the corpus were annotated with a participant label; all verbs were annotated with an event label. For both participants and events, we also offered the label unclear if the annotator could not assign another label. We additionally annotated coreference chains between NPs. Thus, the process resulted in three layers of annotation: event types, participant types and coreference annotation. These are described in detail below.
As a first layer, we annotated event types. There are two kinds of event type labels, scenario-specific event type labels and general labels. The general labels are used across every scenario and mark general features, for example whether an event belongs to the scenario at all. For the scenario-specific labels, we designed an unique template for every scenario, with a list of script-relevant event types that were used as labels. Such labels include for example ScrEv_close_drain in taking a bath as in Example UID10 (see Figure 1 for a complete list for the taking a bath scenario)
I start by closing $_{\textsc {\scriptsize ScrEv\_close\_drain}}$ the drain at the bottom of the tub.
The general labels that were used in addition to the script-specific labels in every scenario are listed below:
ScrEv_other. An event that belongs to the scenario, but its event type occurs too infrequently (for details, see below, Section "Modification of the Schema" ). We used the label “other" because event classification would become too finegrained otherwise.
Example: After I am dried I put my new clothes on and clean up $_{\textsc {\scriptsize ScrEv\_other}}$ the bathroom.
RelNScrEv. Related non-script event. An event that can plausibly happen during the execution of the script and is related to it, but that is not part of the script.
Example: After finding on what I wanted to wear, I went into the bathroom and shut $_{\textsc {\scriptsize RelNScrEv}}$ the door.
UnrelEv. An event that is unrelated to the script.
Example: I sank into the bubbles and took $_{\textsc {\scriptsize UnrelEv}}$ a deep breath.
Additionally, the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking, as shown in Example UID10 . Today I took a bath $_{\textsc {\scriptsize Evoking}}$ in my new apartment.
As in the case of the event type labels, there are two kinds of participant labels: general labels and scenario-specific labels. The latter are part of the scenario-specific templates, e.g. ScrPart_drain in the taking a bath scenario, as can be seen in Example UID15 .
I start by closing the drain $_{\textsc {\scriptsize ScrPart\_drain}}$ at the bottom of the tub.
The general labels that are used across all scenarios mark noun phrases with scenario-independent features. There are the following general labels:
ScrPart_other. A participant that belongs to the scenario, but its participant type occurs only infrequently.
Example: I find my bath mat $_{\textsc {\scriptsize ScrPart\_other}}$ and lay it on the floor to keep the floor dry.
NPart. Non-participant. A referential NP that does not belong to the scenario.
Example: I washed myself carefully because I did not want to spill water onto the floor $_{\textsc {\scriptsize NPart}}$ .
SuppVComp. A support verb complement. For further discussion of this label, see Section "Special Cases"
Example: I sank into the bubbles and took a deep breath $_{\textsc {\scriptsize SuppVComp}}$ .
Head_of_Partitive. The head of a partitive or a partitive-like construction. For a further discussion of this label cf. Section "Special Cases"
Example: I grabbed a bar $_{\textsc {\scriptsize Head\_of\_Partitive}}$ of soap and lathered my body.
No_label. A non-referential noun phrase that cannot be labeled with another label. Example: I sat for a moment $_{\textsc {\scriptsize No\_label}}$ , relaxing, allowing the warm water to sooth my skin.
All NPs labeled with one of the labels SuppVComp, Head_of_Partitive or No_label are considered to be non-referential. No_label is used mainly in four cases in our data: non-referential time expressions (in a while, a million times better), idioms (no matter what), the non-referential “it” (it felt amazing, it is better) and other abstracta (a lot better, a little bit).
In the first annotation phase, annotators were asked to mark verbs and noun phrases that have an event or participant type, that is not listed in the template, as MissScrEv/ MissScrPart (missing script event or participant, resp.). These annotations were used as a basis for extending the templates (see Section "Modification of the Schema" ) and replaced later by newly introduced labels or ScrEv_other and ScrPart_other respectively.
All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent. The annotation was done by linking heads of NPs (see Example UID21 , where the links are indicated by coindexing). As a rule, we assume that each element of a coreference chain is marked with the same participant type label.
I $ _{\textsc {\scriptsize Coref1}}$ washed my $ _{\textsc {\scriptsize Coref1}}$ entire body $ _{\textsc {\scriptsize Coref2}}$ , starting with my $ _{\textsc {\scriptsize Coref1}}$ face $ _{\textsc {\scriptsize Coref3}} $ and ending with the toes $ _{\textsc {\scriptsize Coref4}} $ . I $ _{\textsc {\scriptsize Coref1}}$ always wash my $ _{\textsc {\scriptsize Coref1}}$ toes $_{\textsc {\scriptsize Coref4}}$ very thoroughly ...
The assignment of an entity to a referent is not always trivial, as is shown in Example UID21 . There are some cases in which two discourse referents are grouped in a plural NP. In the example, those things refers to the group made up of shampoo, soap and sponge. In this case, we asked annotators to introduce a new coreference label, the name of which indicates which referents are grouped together (Coref_group_washing_tools). All NPs are then connected to the group phrase, resulting in an additional coreference chain.
I $ _{\textsc {\scriptsize Coref1}}$ made sure that I $ _{\textsc {\scriptsize Coref1}}$ have my $ _{\textsc {\scriptsize Coref1}}$ shampoo $ _{\textsc {\scriptsize Coref2 + Coref\_group\_washing\_tools}}$ , soap $_{\textsc {\scriptsize Coref3 + Coref\_group\_washing\_tools}}$ and sponge $ _{\textsc {\scriptsize Coref4 + Coref\_group\_washing\_tools}}$ ready to get in. Once I $ _{\textsc {\scriptsize Coref1}}$ have those things $ _{\textsc {\scriptsize Coref\_group\_washing\_tools}}$ I $ _{\textsc {\scriptsize Coref1}}$ sink into the bath. ... I $ _{\textsc {\scriptsize Coref1}}$ applied some soap $ _{\textsc {\scriptsize Coref3}}$ on my $ _{\textsc {\scriptsize Coref1}}$ body and used the sponge $ _{\textsc {\scriptsize Coref4}}$ to scrub a bit. ... I $ _{\textsc {\scriptsize Coref1}}$ rinsed the shampoo $ _{\textsc {\scriptsize Coref2}}$ . Example UID21 thus contains the following coreference chains:
Coref1: I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I $\rightarrow $ I $\rightarrow $ I $\rightarrow $ my $\rightarrow $ I
Coref2: shampoo $\rightarrow $ shampoo
Coref3: soap $\rightarrow $ soap
Coref4: sponge $\rightarrow $ sponge
Coref_group_washing_tools: shampoo $\rightarrow $ soap $\rightarrow $ sponge $\rightarrow $ things
Development of the Schema
The templates were carefully designed in an iterated process. For each scenario, one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories. For a subset of the scenarios, preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point. Subsequently, the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template. Necessary extensions and changes in the templates were discussed and agreed upon. Most of the cases of disagreement were related to the granularity of the event and participant types. We agreed on the script-specific functional equivalence as a guiding principle. For example, reading a book, listening to music and having a conversation are subsumed under the same event label in the flight scenario, because they have the common function of in-flight entertainment in the scenario. In contrast, we assumed different labels for the cake tin and other utensils (bowls etc.), since they have different functions in the baking a cake scenario and accordingly occur with different script events.
Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary. Instead, scripts describe cognitively prominent events in an activity. An example can be found in the flight scenario. While more than a third of the turkers mentioned the event of fastening the seat belts in the plane (buckle_seat_belt), no person wrote about undoing their seat belts again, although in reality both events appear equally often. Consequently, we added an event type label for buckling up, but no label for undoing the seat belts.
First Annotation Phase
We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section "Inter-Annotator Agreement" ). It was found to be sufficiently high.
Annotation of the corpus together with some pre- and post-processing of the data required about 500 hours of work. All stories were annotated with event and participant types (a total of 12,188 and 43,946 instances, respectively). On average there were 7 coreference chains per story with an average length of 6 tokens.
Modification of the Schema
After the first annotation round, we extended and changed the templates based on the results. As mentioned before, we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates. Based on the instances with these labels (a total of 941 and 1717 instances, respectively), we extended the guidelines to cover the sufficiently frequent cases. In order to include new labels for event and participant types, we tried to estimate the number of instances that would fall under a certain label. We added new labels according to the following conditions:
For the participant annotations, we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories (i.e. in approximately 5% of the stories).
For the event annotations, we chose those new labels for event types that would appear in at least 5 different stories.
In order to avoid too fine a granularity of the templates, all other instances of MissScrEv and MissScrPart were re-labeled with ScrEv_other and ScrPart_other. We also relabeled participants and events from the first annotation phase with ScrEv_other and ScrPart_other, if they did not meet the frequency requirements. The event label air_bathroom (the event of letting fresh air into the room after the bath), for example, was only used once in the stories, so we relabeled that instance to ScrEv_other.
Additionally, we looked at the DeScript corpus BIBREF3 , which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript (see Section "Comparison to the DeScript Corpus" ). Every such set contains event descriptions that describe a certain event type. We extended our templates with additional labels for these events, if they were not yet part of the template.
Special Cases
Noun-noun compounds were annotated twice with the same label (whole span plus the head noun), as indicated by Example UID31 . This redundant double annotation is motivated by potential processing requirements.
I get my (wash (cloth) $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ ) $ _{\textsc {\scriptsize ScrPart\_washing\_tools}}$ and put it under the water.
A special treatment was given to support verb constructions such as take time, get home or take a seat in Example UID32 . The semantics of the verb itself is highly underspecified in such constructions; the event type is largely dependent on the object NP. As shown in Example UID32 , we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp (support verb complement), indicating that it does not have a proper reference.
I step into the tub and take $ _{\textsc {\scriptsize ScrEv\_sink\_water}} $ a seat $ _{\textsc {\scriptsize SuppVComp}} $ .
We used the Head_of_Partitive label for the heads in partitive constructions, assuming that the only referential part of the construction is the complement. This is not completely correct, since different partitive heads vary in their degree of concreteness (cf. Examples UID33 and UID33 ), but we did not see a way to make the distinction sufficiently transparent to the annotators. Our seats were at the back $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of the train $ _{\textsc {\scriptsize ScrPart\_train}} $ . In the library you can always find a couple $ _{\textsc {\scriptsize Head\_of\_Partitive}} $ of interesting books $ _{\textsc {\scriptsize ScrPart\_book}} $ .
Group-denoting NPs sometimes refer to groups whose members are instances of different participant types. In Example UID34 , the first-person plural pronoun refers to the group consisting of the passenger (I) and a non-participant (my friend). To avoid a proliferation of participant type labels, we labeled these cases with Unclear.
I $ _{\textsc {\scriptsize {ScrPart\_passenger}}}$ wanted to visit my $_{\textsc {\scriptsize {ScrPart\_passenger}}}$ friend $ _{\textsc {\scriptsize {NPart}}}$ in New York. ... We $_{\textsc {\scriptsize Unclear}}$ met at the train station.
We made an exception for the Getting a Haircut scenario, where the mixed participant group consisting of the hairdresser and the customer occurs very often, as in Example UID34 . Here, we introduced the additional ad-hoc participant label Scr_Part_hairdresser_customer.
While Susan $_{\textsc {\scriptsize {ScrPart\_hairdresser}}}$ is cutting my $_{\textsc {\scriptsize {ScrPart\_customer}}}$ hair we $_{\textsc {\scriptsize Scr\_Part\_hairdresser\_customer}}$ usually talk a bit.
Inter-Annotator Agreement
In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.
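Fleiss' Kappa over such parallel annotations can be computed with off-the-shelf tools; below is a sketch using statsmodels, where the small matrix of label indices for the four annotators is made up for illustration:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per annotated instance, one column per annotator; entries are the
# indices of the labels each annotator assigned.
ratings = np.array([
    [0, 0, 0, 1],   # three annotators chose label 0, one chose label 1
    [2, 2, 2, 2],
    [1, 0, 1, 1],
])

table, _ = aggregate_raters(ratings)   # instances x categories count table
print(fleiss_kappa(table))
```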
For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement.
Annotated Corpus Statistics
Figure 5 gives an overview of the number of event and participant types provided in the templates. Taking a flight and getting a haircut stand out with a large number of both event and participant types, which is due to the inherent complexity of the scenarios. In contrast, planting a tree and going on a train contain the fewest labels. There are 19 event and participant types on average.
Figure 6 presents overview statistics about the usage of event labels, participant labels and coreference chain annotations. As can be seen, there are usually many more mentions of participants than events. For coreference chains, there are some chains that are really long (which also results in a large scenario-wise standard deviation). Usually, these chains describe the protagonist.
We also found again that the flying in an airplane scenario stands out in terms of participant mentions, event mentions and average number of coreference chains.
Figure 7 shows for every participant label in the baking a cake scenario the number of stories which they occurred in. This indicates how relevant a participant is for the script. As can be seen, a small number of participants are highly prominent: cook, ingredients and cake are mentioned in every story. The fact that the protagonist appears most often consistently holds for all other scenarios, where the acting person appears in every story, and is mentioned most frequently.
Figure 8 shows the distribution of participant/event type labels over all appearances over all scenarios on average. The groups stand for the most frequently appearing label, the top 2 to 5 labels in terms of frequency and the top 6 to 10. ScrEv_other and ScrPart_other are shown separately. As can be seen, the most frequently used participant label (the protagonist) makes up about 40% of overall participant instances. The four labels that follow the protagonist in terms of frequency together appear in 37% of the cases. More than 2 out of 3 participants in total belong to one of only 5 labels.
In contrast, the distribution for events is more balanced. 14% of all event instances have the most prominent event type. ScrEv_other and ScrPart_other both appear as labels in at most 5% of all event and participant instantiations: The specific event and participant type labels in our templates cover by far most of the instances.
In Figure 9 , we grouped participants similarly into the first, the top 2-5 and top 6-10 most frequently appearing participant types. The figure shows for each of these groups the average frequency per story, and in the rightmost column the overall average. The results correspond to the findings from the last paragraph.
Comparison to the DeScript Corpus
As mentioned previously, the InScript corpus is part of a larger research project, in which also a corpus of a different kind, the DeScript corpus, was created. DeScript covers 40 scenarios, and also contains the 10 scenarios from InScript. This corpus contains texts that describe scripts on an abstract and generic level, while InScript contains instantiations of scripts in narrative texts. Script events in DeScript are described in a very simple, telegram-style language (see Figure 2 ). Since one of the long-term goals of the project is to align the InScript texts with the script structure given from DeScript, it is interesting to compare both resources.
The InScript corpus exhibits much more lexical variation than DeScript. Many approaches use the type-token ratio to measure this variance. However, this measure is known to be sensitive to text length (see e.g. Tweedie1998), which would result in very small values for InScript and relatively large ones for DeScript, given the large average difference of text lengths between the corpora. Instead, we decided to use the Measure of Textual Lexical Diversity (MTLD) (McCarthy2010, McCarthy2005), which is familiar in corpus linguistics. This metric measures the average number of tokens in a text that are needed to retain a type-token ratio above a certain threshold. If the MTLD for a text is high, many tokens are needed to lower the type-token ratio under the threshold, so the text is lexically diverse. In contrast, a low MTLD indicates that only a few words are needed to make the type-token ratio drop, so the lexical diversity is smaller. We use the threshold of 0.71, which is proposed by the authors as a well-proven value.
Figure 10 compares the lexical diversity of both resources. As can be seen, the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions, across all scenarios. For both resources, the flying in an airplane scenario is most diverse (as was also indicated above by the mean word type overlap). However, the difference in the variation of lexical variance of scenarios is larger for DeScript than for InScript. Thus, the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts. We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events. We excluded events for which there were less than 10 occurrences in DeScript or InScript. Since there is only an event annotation for 50 ESDs per scenario in DeScript, we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable.
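A small sketch of the entropy computation over lemma counts (in bits); the example lemma lists are made up to mimic a low-variance and a high-variance event type:

```python
import math
from collections import Counter

def entropy_over_lemmas(lemmas):
    """Shannon entropy (in bits) of the lemma distribution for one event type."""
    counts = Counter(lemmas)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# e.g. verbs realising a `wait`-like event vs. a `spend_time_train`-like event
print(entropy_over_lemmas(["wait", "wait", "wait", "await"]))
print(entropy_over_lemmas(["read", "sleep", "chat", "listen", "doze"]))
```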
Figure 11 shows as an example the entropy values for the event types in the going on a train scenario. As can be seen in the graph, the entropy for InScript is in general higher than for DeScript. In the stories, a wider variety of verbs is used to describe events. There are also large differences between events: While wait has a really low entropy, spend_time_train has an extremely high entropy value. This event type covers many different activities such as reading, sleeping etc.
Conclusion
In this paper we described the InScript corpus of 1,000 narrative texts annotated with script structure and coreference information. We described the annotation process, various difficulties encountered during annotation and different remedies that were taken to overcome these. One of the future research goals of our project is also concerned with finding automatic methods for text-to-script mapping, i.e. for the alignment of text segments with script states. We consider InScript and DeScript together as a resource for studying this alignment. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.
Acknowledgements
This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding'. | four different annotators |
bc9c31b3ce8126d1d148b1025c66f270581fde10 | bc9c31b3ce8126d1d148b1025c66f270581fde10_0 | Q: What datasets are used to evaluate this approach?
Text: Introduction
Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE). First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact and providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princes Henriette's child the most. We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 is depicted in Figure 1 . Such perturbations to the training data belong to a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .
Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors.
Background and Notation
In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle $ , where $s,o\in \xi $ , the set of entities, and $r\in \mathcal {R}$ , the set of relations. To model the KG, a scoring function $\psi :\xi \times \mathcal {R}\times \xi \rightarrow \mathbb {R}$ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = f(\mathbf {e}_s, \mathbf {e}_r) \cdot \mathbf {e}_o$ , where $\mathbf {e}_s,\mathbf {e}_r,\mathbf {e}_o\in \mathbb {R}^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $f(\mathbf {e}_s, \mathbf {e}_r) = \mathbf {e}_s \odot \mathbf {e}_r$ , where $\odot $ is the element-wise multiplication operator. Similarly, in ConvE, $f(\mathbf {e}_s, \mathbf {e}_r)$ is computed by a convolution on the concatenation of $\mathbf {e}_s$ and $\mathbf {e}_r$ .
We use the same setup as BIBREF10 for training, i.e., incorporate binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\sigma (\psi (s,r,o))$ for $\langle s,r,o\rangle $ , the loss is defined as: $\mathcal {L}(G) = -\sum _{(s,r)}\sum _o \big [\, y^{s,r}_o \log (\sigma (\psi (s,r,o))) + (1-y^{s,r}_o)\log (1 - \sigma (\psi (s,r,o)))\, \big ]$ . Gradient descent is used to learn the embeddings $\mathbf {e}_s, \mathbf {e}_r, \mathbf {e}_o$ , and the parameters of $f$ , if any.
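As an illustration, here is a small numpy sketch of the DistMult scoring function and this training objective; it is not the authors' implementation, and the entity/relation vectors are assumed to be given:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def distmult_score(e_s, e_r, e_o):
    """psi(s, r, o) = f(e_s, e_r) . e_o, with f the element-wise product."""
    return np.dot(e_s * e_r, e_o)

def bce_loss(e_s, e_r, E_obj, y):
    """Binary cross-entropy over all candidate objects for one (s, r) pair.

    E_obj: (num_entities, d) matrix of object embeddings, y: 0/1 label per object.
    """
    scores = (e_s * e_r) @ E_obj.T
    p = sigmoid(scores)
    eps = 1e-12
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```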
Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE)
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ i.e $s^{\prime }$ and $r^{\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\langle s, r^{\prime }, o^{\prime }\rangle $ and $\langle s, r^{\prime }, o\rangle $ in appendices "Modifications of the Form 〈s,r ' ,o ' 〉\langle s, r^{\prime }, o^{\prime } \rangle " and "Modifications of the Form 〈s,r ' ,o〉\langle s, r^{\prime }, o \rangle " , and leave empirical evaluation of these modifications for future work.
Removing a fact
For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define the influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work has used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\prime },r^{\prime },o}\in G$ such that the score $\psi (s,r,o)$ when trained on $G$ and the score $\overline{\psi }(s,r,o)$ when trained on $G-\lbrace {s^{\prime },r^{\prime },o}\rbrace $ are maximally different, i.e. $\operatornamewithlimits{argmax}_{(s^{\prime }, r^{\prime })\in \text{Nei}(o)} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ , where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ , and $\text{Nei}(o)=\lbrace (s^{\prime },r^{\prime })|\langle s^{\prime },r^{\prime },o \rangle \in G \rbrace $ .
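A sketch of the resulting search over $\text{Nei}(o)$ , with the influence estimate left abstract; the efficient first-order approximation derived below can be plugged in for `delta`, and exact retraining could be used instead for very small graphs:

```python
def most_influential_neighbor(target, neighbors, delta):
    """Return the (s', r') in Nei(o) whose removal is estimated to change the
    target score the most.

    `delta(s2, r2, target)` should return an estimate of
    psi(s, r, o) - psi_bar(s, r, o) for removing <s2, r2, o>.
    """
    return max(neighbors, key=lambda sr: delta(sr[0], sr[1], target))
```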
Adding a new fact
We are also interested in investigating the robustness of models, i.e., how sensitive the predictions are to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$ , we are interested in identifying a single fake fact ${s^{\prime },r^{\prime },o}$ that, when added to the knowledge graph $G$ , changes the prediction score $\psi (s,r,o)$ the most. Using $\overline{\psi }(s,r,o)$ as the score after training on $G\cup \lbrace {s^{\prime },r^{\prime },o}\rbrace $ , we define the adversary as $\operatornamewithlimits{argmax}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ , where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ . The search here is over any possible $s^{\prime }\in \xi $ , which is often in the millions for most real-world KGs, and $r^{\prime }\in \mathcal {R}$ . We also identify adversaries that increase the prediction score for a specific false triple, i.e., for a target fake fact ${s,r,o}$ , the adversary is $\operatornamewithlimits{argmin}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ , where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ is defined as before.
Challenges
There are a number of crucial challenges when conducting such adversarial attacks on KGs. First, evaluating the effect of changing the KG on the score of the target fact ( $\overline{\psi }(s,r,o)$ ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$ . Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\xi | \times |\mathcal {R}|$ , which, for example in the YAGO3-10 KG, can be as many as $4.5 M$ possible facts for a single target prediction.
Efficiently Identifying the Modification
In this section, we propose algorithms to address the aforementioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications.
First-order Approximation of Influence
We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $\mathbf {e}_s$ , $\mathbf {e}_r$ , and $\mathbf {e}_o$ to denote the embeddings of $s,r,o$ at the solution of $\operatornamewithlimits{argmin} \mathcal {L}(G)$ , and when considering the adversarial triple $\langle s^{\prime }, r^{\prime }, o \rangle $ , we use $\overline{\mathbf {e}}_s$ , $\overline{\mathbf {e}}_r$ , and $\overline{\mathbf {e}}_o$ for the new embeddings of $s,r,o$ , respectively. Thus $\overline{\mathbf {e}}_s, \overline{\mathbf {e}}_r, \overline{\mathbf {e}}_o$ is a solution to $\operatornamewithlimits{argmin} \mathcal {L}(\overline{G})$ with $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ , where $\overline{G} = G\cup \lbrace \langle s^{\prime }, r^{\prime }, o \rangle \rbrace $ . Since the adversary shares the object $o$ with the target, we approximate its effect through the change in $\mathbf {e}_o$ alone and neglect its effect on $\mathbf {e}_s$ and $\mathbf {e}_r$ ; computing the exact effect on all parameters would require inverting the Hessian of the full model, an $O(n^3)$ operation. The change in the target score is then $\overline{\psi }(s,r,o)-\psi (s, r, o) = \mathbf {z}_{s,r}\, (\overline{\mathbf {e}}_o - \mathbf {e}_o)$ , where $\mathbf {z}_{s, r} = f(\mathbf {e}_s,\mathbf {e}_r)$ . To estimate $\overline{\mathbf {e}}_o - \mathbf {e}_o$ , we use the fact that $\nabla _{e_o} \mathcal {L}(\overline{G})=0$ at convergence. Writing $\mathbf {z}_{s^{\prime }, r^{\prime }} = f(\mathbf {e}_{s^{\prime }},\mathbf {e}_{r^{\prime }})$ and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$ , and letting $H_o$ be the $d\times d$ Hessian of $\mathcal {L}(G)$ with respect to $\mathbf {e}_o$ (at which $\nabla _{e_o} \mathcal {L}(G)=0$ ), a first-order Taylor expansion of the gradient around $\mathbf {e}_o$ gives $\overline{\mathbf {e}}_o - \mathbf {e}_o = (1-\varphi )\, \big (H_o + \varphi (1-\varphi )\, \mathbf {z}_{s^{\prime },r^{\prime }}^{\intercal } \mathbf {z}_{s^{\prime },r^{\prime }}\big )^{-1} \mathbf {z}_{s^{\prime },r^{\prime }}^{\intercal }$ . The matrix to invert is only $d\times d$ , so the resulting estimate of $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ can be computed efficiently for every candidate pair $(s^{\prime }, r^{\prime })$ ; an analogous derivation covers the removal of an observed triple $\langle s^{\prime }, r^{\prime }, o \rangle $ .
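A minimal numpy sketch of this estimate, assuming the Hessian $H_o$ of the loss with respect to the object embedding has already been computed; variable names are illustrative, and the sign convention follows $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s,r,o)-\overline{\psi }(s,r,o)$ :

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_change_add(z_sr, z_adv, e_o, H_o):
    """First-order estimate of psi(s,r,o) - psi_bar(s,r,o) after adding <s', r', o>.

    z_sr  : f(e_s, e_r) for the target triple, shape (d,)
    z_adv : f(e_s', e_r') for the candidate adversary, shape (d,)
    e_o   : current object embedding, shape (d,)
    H_o   : d x d Hessian of L(G) with respect to e_o (precomputed)
    """
    phi = sigmoid(np.dot(z_adv, e_o))
    A = H_o + phi * (1.0 - phi) * np.outer(z_adv, z_adv)
    delta_e_o = (1.0 - phi) * np.linalg.solve(A, z_adv)   # e_o_bar - e_o
    return -np.dot(z_sr, delta_e_o)   # psi - psi_bar = -z_sr . (e_o_bar - e_o)
```

Because only a $d\times d$ linear system is solved per candidate, the cost scales with the embedding dimension rather than with the number of model parameters.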
Continuous Optimization for Search
Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\langle s^{\prime }, r^{\prime }, o \rangle $. This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\langle s^{\prime }, r^{\prime }, o \rangle $ by using a gradient-based algorithm on the vector $z_{s^{\prime },r^{\prime }}$ in the embedding space (reminder, $z_{s^{\prime },r^{\prime }}=f(e_{s^{\prime }},e_{r^{\prime }})$), solving the following continuous optimization problem in $\mathbb {R}^d$: $\operatorname{argmax}_{z_{s^{\prime }, r^{\prime }}} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$. After identifying the optimal $z_{s^{\prime }, r^{\prime }}$, we still need to generate the pair $(s^{\prime },r^{\prime })$. We design a network, shown in Figure 2, that maps the vector $z_{s^{\prime },r^{\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\prime },r^{\prime })$. In particular, we train an auto-encoder where the encoder is fixed to receive $s$ and $r$ as one-hot inputs, and calculates $z_{s,r}$ in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $z_{s,r}$ as input and produce $s$ and $r$, essentially inverting $f$. We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given $z_{s^{\prime },r^{\prime }}$. The accuracy of recovered pairs (and of each argument) is given in Table 1. As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $z_{s^{\prime },r^{\prime }}$ to $(s^{\prime },r^{\prime })$ pairs.
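The sketch below illustrates the two-stage search under the same toy setup: numerical gradient ascent on the candidate vector $z_{s^{\prime },r^{\prime }}$, followed by decoding it back to a concrete $(s^{\prime },r^{\prime })$ pair. The paper trains a dedicated inverter network for the decoding step; the nearest-neighbour lookup over DistMult-style $f(e_{s^{\prime }}, e_{r^{\prime }})$ vectors used here is only a simple stand-in, and all embeddings are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_ent, n_rel = 8, 50, 5
E = rng.normal(size=(n_ent, d))     # placeholder entity embeddings
R = rng.normal(size=(n_rel, d))     # placeholder relation embeddings
e_o = rng.normal(size=d)            # target object embedding
z_sr = rng.normal(size=d)           # f(e_s, e_r) for the target triple
H_o = 2.0 * np.eye(d)               # stand-in positive-definite Hessian

def delta(z):
    """Estimated drop in the target score if a fake fact with f(e_s', e_r') = z is added."""
    phi = 1.0 / (1.0 + np.exp(-(z @ e_o)))
    A = H_o + phi * (1.0 - phi) * np.outer(z, z)
    return z_sr @ ((1.0 - phi) * np.linalg.solve(A, z))

# stage 1: continuous search over z in R^d (numerical gradients for brevity)
z, eps, lr = rng.normal(size=d), 1e-4, 0.5
for _ in range(200):
    grad = np.array([(delta(z + eps * b) - delta(z - eps * b)) / (2 * eps)
                     for b in np.eye(d)])
    z += lr * grad

# stage 2: decode z to an (s', r') pair; DistMult-style f(e_s, e_r) = e_s * e_r,
# with nearest-neighbour search standing in for the trained inverter network
candidates = [(s, r, E[s] * R[r]) for s in range(n_ent) for r in range(n_rel)]
s_adv, r_adv, z_adv = min(candidates, key=lambda c: np.linalg.norm(c[2] - z))
print("adversarial pair:", (s_adv, r_adv), "estimated score drop:", delta(z_adv))
```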
Experiments
We evaluate by ( "Influence Function vs " ) comparing estimate with the actual effect of the attacks, ( "Robustness of Link Prediction Models" ) studying the effect of adversarial attacks on evaluation metrics, ( "Interpretability of Models" ) exploring its application to the interpretability of KG representations, and ( "Finding Errors in Knowledge Graphs" ) detecting incorrect triples.
Influence Function vs. CRIAGE
To evaluate the quality of our approximations and compare with the influence function (IF), we conduct leave-one-out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with the rankings as predicted by: CRIAGE, the influence function with and without the Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate the addition attack by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\rho $ and Kendall's $\tau $ rank correlation coefficients over 10 random target samples are provided in Table 3. CRIAGE performs comparably to the influence function, confirming that our approximation is accurate. The influence function is slightly more accurate because it uses the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $e_o$. The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ($d\le 10$) before we run out of memory. In Figure 3, we show the time to compute a single adversary by IF compared to CRIAGE, as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As the figure shows, CRIAGE is mostly unaffected by the number of entities while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, IF is infeasible for them.
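As a small illustration of this evaluation protocol, the snippet below computes the two rank correlation coefficients with scipy; the score changes are made-up placeholder numbers, not values from the paper.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# exact score changes from leave-one-out retraining vs. the cheap estimates
exact_change = np.array([0.91, 0.40, 0.13, 0.75, 0.02])
estimated    = np.array([0.80, 0.35, 0.20, 0.70, 0.05])

rho, _ = spearmanr(exact_change, estimated)
tau, _ = kendalltau(exact_change, estimated)
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```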
Robustness of Link Prediction Models
Now we evaluate the effectiveness of CRIAGE in attacking link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and to measure their effect on the MRR and Hits@k metrics (ranking evaluations) after conducting the attack and retraining the model.
Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\langle s, r, o \rangle $, we consider two baselines: 1) choosing a random fake fact $\langle s^{\prime }, r^{\prime }, o \rangle $ (Random Attack); 2) finding $(s^{\prime }, r^{\prime })$ by first calculating $f(e_s, e_r)$ and then feeding $-f(e_s, e_r)$ to the decoder of the inverter function (Opposite Attack). In addition to the main attack, we introduce two other alternatives of our method: (1) a variant that uses the same procedure to increase the score of a fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for it, and (2) a variant that selects between these two attacks based on which has a higher estimated change in score.
All-Test The result of the attack on all test facts as targets is provided in Table 4. CRIAGE outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. The variant that targets fake facts is more effective than the direct attack, since changing the score of a fake fact is easier than that of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix "Sample Adversarial Attacks"), the attack mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.
Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples that 1) the model predicts correctly, 2) difference between their scores and the negative sample with the highest score is minimum. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4 . The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming other baselines, they indicate that ConvE's confidence is much more robust.
Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them. Moreover, the attack affects DistMult more on the playsFor and isMarriedTo relations, while affecting ConvE more on the isConnectedTo relation.
Examples Sample adversarial attacks are provided in Table 5. The attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type.
Interpretability of Models
To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each prediction, we identify the most influential fact using our removal attack. Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact, and that appear more than $90\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\wedge R_2(c,b)\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such as that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilize the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, our extracted rules contain all the rules provided by that approach, demonstrating that our method can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.
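A minimal sketch of this aggregation step is shown below, assuming each explanation pairs a target triple with the fact selected for removal and that the graph is available as a set of triples; the relation names and the helper function are illustrative.

```python
from collections import Counter

def extract_rules(explanations, graph, min_support=0.9):
    """Mine length-2 horn rules R1(a,c) ^ R2(c,b) => R(a,b).

    explanations: list of (target, removed) pairs, where target = (a, R, b)
                  and removed = (c, R2, b) shares the object b with the target.
    graph:        set of observed (s, r, o) triples.
    """
    per_relation = Counter()
    rule_counts = Counter()
    for (a, R, b), (c, R2, _b) in explanations:
        per_relation[R] += 1
        for (s, R1, o) in graph:
            if s == a and o == c:             # connecting fact R1(a, c)
                rule_counts[(R1, R2, R)] += 1
    return [rule for rule, n in rule_counts.items()
            if n / per_relation[rule[2]] >= min_support]

# toy usage with made-up facts
graph = {("anna", "isMarriedTo", "bob"), ("bob", "hasChild", "carl")}
explanations = [(("anna", "hasChild", "carl"), ("bob", "hasChild", "carl"))]
print(extract_rules(explanations, graph))   # [('isMarriedTo', 'hasChild', 'hasChild')]
```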
Finding Errors in Knowledge Graphs
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
To evaluate this application, we inject random triples into the graph, and measure the ability of our method to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\langle s^{\prime }, r, o\rangle $ where $s^{\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ where $s^{\prime }$ and $r^{\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in Table 7. When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that our method outperforms both of these with a considerable gap, obtaining an accuracy of $42\%$ and $55\%$ in detecting errors.
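The selection rule itself is simple once the influence estimate is available; a minimal sketch, reusing an estimator like the one from the approximation section and treating all inputs as placeholders:

```python
def least_influential_neighbor(target, neighbors, influence):
    """Flag the neighbor whose removal is estimated to change the target's score the least.

    neighbors : list of (s', r') pairs sharing the object with `target`
    influence : callable returning the estimated Delta_(s',r')(target)
    """
    return min(neighbors, key=lambda n: abs(influence(n, target)))

# toy usage with a fake influence table
influences = {("a", "r1"): 0.8, ("b", "r2"): 0.05, ("c", "r1"): 0.3}
suspect = least_influential_neighbor(("x", "r", "o"),
                                     list(influences),
                                     lambda n, t: influences[n])
print(suspect)   # ('b', 'r2') -- the least-trusted, likely erroneous neighbor
```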
Related Work
Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although CRIAGE is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix "First-order Approximation of the Change For TransE" .
Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using CRIAGE, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions.
Interpretability and Adversarial Modification There has been significant recent interest in conducting adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain interpretability and, further, to evaluate the robustness of those models. BIBREF20 uses influence functions to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to applying their established method to KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of the search space makes the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs.
Conclusions
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce CRIAGE, completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. CRIAGE uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that CRIAGE can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we use CRIAGE to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using CRIAGE, we introduce a novel automated error detection method for knowledge graphs. We have released the open-source implementation of our models at: https://pouyapez.github.io/criage.
Acknowledgements
We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.
Appendix
We approximate the change in the score of the target triple upon applying attacks other than the $\langle s^{\prime }, r^{\prime }, o \rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relation embeddings. As a result, we just need to study the attacks in the form of $\langle s, r^{\prime }, o \rangle $ and $\langle s, r^{\prime }, o^{\prime } \rangle $. Defining the scoring function as $\psi (s,r,o) = f(e_s, e_r) \cdot e_o = z_{s,r} \cdot e_o$, we further assume that $\psi (s,r,o) = e_s \cdot f(e_r, e_o) = e_s \cdot z_{r,o}$.
Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle $
Using a similar argument as for the attacks in the form of $\langle s^{\prime }, r^{\prime }, o \rangle $, we can calculate the effect of the attack, $\overline{\psi }{(s,r,o)}-\psi (s, r, o)$, as: $\overline{\psi }(s,r,o)-\psi (s, r, o)=(\overline{e}_s - e_s) \cdot z_{r,o}$, where $z_{r, o} = f(e_r, e_o)$.
We now derive an efficient computation for $(\overline{e}_s - e_s)$. First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s, r^{\prime }, o^{\prime } \rangle )$ over $e_s$ is: $\nabla _{e_s} \mathcal {L}(\overline{G}) = \nabla _{e_s} \mathcal {L}(G) - (1-\varphi ) z_{r^{\prime }, o^{\prime }}$, where $z_{r^{\prime }, o^{\prime }} = f(e_{r^{\prime }}, e_{o^{\prime }})$, and $\varphi = \sigma (\psi (s,r^{\prime },o^{\prime }))$. At convergence, after retraining, we expect $\nabla _{e_s} \mathcal {L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla _{e_s} \mathcal {L}(\overline{G})$ to get: $0 \approx - (1-\varphi ) z_{r^{\prime },o^{\prime }}+ (H_s+\varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})(\overline{e}_s - e_s)$, where $H_s$ is the $d\times d$ Hessian matrix for $s$, i.e., the second-order derivative of the loss w.r.t. $e_s$, computed sparsely. Solving for $\overline{e}_s - e_s$ gives us: $\overline{e}_s - e_s = (1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})^{-1} z_{r^{\prime },o^{\prime }}$. In practice, $H_s$ is positive definite, making $H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}$ positive definite as well, and invertible. Then, we compute the score change as: $\overline{\psi }(s,r,o)-\psi (s, r, o)= z_{r,o} \cdot (\overline{e}_s - e_s) = \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})^{-1} z_{r^{\prime },o^{\prime }}\big )\cdot z_{r,o}$.
Modifications of the Form $\langle s, r^{\prime }, o \rangle $
In this section we approximate the effect of an attack in the form of $\langle s, r^{\prime }, o \rangle $. In contrast to $\langle s^{\prime }, r^{\prime }, o \rangle $ attacks, for this scenario we need to consider the change in $e_s$ as well as the change in $e_o$ upon applying the attack when approximating the change in the score. Using previous results, we can approximate $\overline{e}_o - e_o$ as: $\overline{e}_o - e_o = (1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}$, and similarly, we can approximate $\overline{e}_s - e_s$ as: $\overline{e}_s - e_s = (1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}$, where $H_s$ ($H_o$) is the Hessian matrix over $e_s$ ($e_o$). Then, using these approximations: $z_{s,r}\cdot (\overline{e}_o - e_o) = z_{s,r} \cdot \big ((1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}\big )$, and: $(\overline{e}_s - e_s)\cdot z_{r,o}= \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}\big )\cdot z_{r,o}$, and then calculate the change in the score as: $\overline{\psi }(s,r,o)-\psi (s, r, o)= z_{s,r}\cdot (\overline{e}_o - e_o) +(\overline{e}_s - e_s)\cdot z_{r,o} = z_{s,r} \cdot \big ((1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}\big )+ \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}\big )\cdot z_{r,o}$.
First-order Approximation of the Change For TransE
Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack on $\overline{\psi }{(s,r,o)}$ (where $\psi {(s,r,o)}=\Vert e_s+e_r-e_o\Vert $ ), we need to compute the new object embedding $\overline{e}_o$. First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ over $e_o$ is: $\nabla _{e_o} \mathcal {L}(\overline{G}) = \nabla _{e_o} \mathcal {L}(G) + (1-\varphi ) \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)}$, where $z_{s^{\prime }, r^{\prime }} = e_{s^{\prime }}+ e_{r^{\prime }}$, and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$. At convergence, after retraining, we expect $\nabla _{e_o} \mathcal {L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla _{e_o} \mathcal {L}(\overline{G})$ to get: $0 \approx (1-\varphi ) \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)}+(H_o - H_{s^{\prime },r^{\prime },o})(\overline{e}_o - e_o)$, where $H_{s^{\prime },r^{\prime },o} = \varphi (1-\varphi )\frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^2}+ \frac{1-\varphi }{\psi (s^{\prime },r^{\prime },o)}I-(1-\varphi ) \frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^3}$, and $H_o$ is the $d\times d$ Hessian matrix for $o$, i.e., the second-order derivative of the loss w.r.t. $e_o$, computed sparsely. Solving for $\overline{e}_o$ gives us: $\overline{e}_o = -(1-\varphi ) (H_o - H_{s^{\prime },r^{\prime },o})^{-1} \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} + e_o$. Then, we compute the score change as: $\overline{\psi }(s,r,o)= \Vert e_s+e_r-\overline{e}_o\Vert = \Vert e_s+e_r+(1-\varphi ) (H_o - H_{s^{\prime },r^{\prime },o})^{-1} \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} - e_o\Vert $.
Calculating this expression is efficient since $H_o$ is a $d\times d$ matrix.
Sample Adversarial Attacks
In this section, we provide the output of CRIAGE for some target triples. Sample adversarial attacks are provided in Table 5 . As it shows, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. | Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs
bc9c31b3ce8126d1d148b1025c66f270581fde10 | bc9c31b3ce8126d1d148b1025c66f270581fde10_1 | Q: What datasets are used to evaluate this approach?
Text: Introduction
Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE). First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princess Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinand Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princess Henriette's child. We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 is depicted in Figure 1 . Such perturbations to the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .
Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors.
Background and Notation
In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle $, where $s,o\in \xi $, the set of entities, and $r\in \mathcal {R}$, the set of relations. To model the KG, a scoring function $\psi :\xi \times \mathcal {R} \times \xi \rightarrow \mathbb {R}$ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = f(e_s, e_r) \cdot e_o$, where $e_s,e_r,e_o\in \mathbb {R}^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $f(e_s, e_r) = e_s \odot e_r$, where $\odot $ is the element-wise multiplication operator. Similarly, in ConvE, $f(e_s, e_r)$ is computed by a convolution on the concatenation of $e_s$ and $e_r$.
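For concreteness, a minimal sketch of the DistMult scoring function under this notation (random placeholder embeddings; not the released implementation):

```python
import numpy as np

def distmult_score(e_s, e_r, e_o):
    """psi(s,r,o) = f(e_s, e_r) . e_o with f(e_s, e_r) = e_s * e_r (element-wise)."""
    return (e_s * e_r) @ e_o

rng = np.random.default_rng(0)
e_s, e_r, e_o = rng.normal(size=(3, 8))   # d = 8 toy embeddings
print(distmult_score(e_s, e_r, e_o))
```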
We use the same setup as BIBREF10 for training, i.e., incorporate binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\sigma (\psi (s,r,o))$ for $\langle s,r,o\rangle $ , the loss is defined as: $\mathcal {L}(G) = -\sum _{(s,r)}\sum _o y^{s,r}_o\log (\sigma (\psi (s,r,o))) + (1-y^{s,r}_o)\log (1 - \sigma (\psi (s,r,o)))$. Gradient descent is used to learn the embeddings $e_s, e_r, e_o$, and the parameters of $f$, if any.
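A minimal numpy sketch of this training objective, assuming the scores $\psi (s,r,o)$ have already been computed for a batch of positive and negative triples (the averaging and the toy numbers are illustrative choices):

```python
import numpy as np

def bce_loss(scores, labels):
    """Binary cross-entropy over triple scores psi(s,r,o); labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-scores))     # sigma(psi)
    return -np.mean(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))

scores = np.array([2.1, -0.3, 1.7])
labels = np.array([1.0, 0.0, 1.0])
print(bce_loss(scores, labels))
```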
Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE)
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ , i.e., $s^{\prime }$ and $r^{\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\langle s, r^{\prime }, o^{\prime }\rangle $ and $\langle s, r^{\prime }, o\rangle $ in the appendices "Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle $" and "Modifications of the Form $\langle s, r^{\prime }, o \rangle $", and leave empirical evaluation of these modifications for future work.
Removing a fact
For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define the influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work has used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\prime },r^{\prime },o}\in G$ such that the score $\psi (s,r,o)$ when trained on $G$ and the score $\overline{\psi }(s,r,o)$ when trained on $G-\lbrace {s^{\prime },r^{\prime },o}\rbrace $ are maximally different, i.e. $\operatorname{argmax}_{(s^{\prime }, r^{\prime })\in \text{Nei}(o)} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$, where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ , and $\text{Nei}(o)=\lbrace (s^{\prime },r^{\prime })|\langle s^{\prime },r^{\prime },o \rangle \in G \rbrace $ .
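Since $\text{Nei}(o)$ is small, this search can be done by brute force once an estimate of $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ is available; a minimal sketch, where the estimator is passed in as a callable and every value is a placeholder:

```python
def most_influential_neighbor(target, graph, estimate_change):
    """Brute-force search over observed facts that share the target's object.

    estimate_change((s', r'), target) approximates
    Delta_(s',r')(s,r,o) = psi(s,r,o) - psi_bar(s,r,o) without retraining.
    """
    s, r, o = target
    neighbors = [(s2, r2) for (s2, r2, o2) in graph
                 if o2 == o and (s2, r2, o2) != target]
    return max(neighbors, key=lambda n: estimate_change(n, target))

# toy usage with a fake influence table
graph = {("a", "r1", "o"), ("b", "r2", "o"), ("x", "r", "o")}
table = {("a", "r1"): 0.2, ("b", "r2"): 0.9}
print(most_influential_neighbor(("x", "r", "o"), graph, lambda n, t: table[n]))
```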
Adding a new fact
We are also interested in investigating the robustness of models, i.e., how sensitive the predictions are to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$, we are interested in identifying a single fake fact ${s^{\prime },r^{\prime },o}$ that, when added to the knowledge graph $G$, changes the prediction score $\psi (s,r,o)$ the most. Using $\overline{\psi }(s,r,o)$ as the score after training on $G\cup \lbrace {s^{\prime },r^{\prime },o}\rbrace $, we define the adversary as: $\operatorname{argmax}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$, where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$. The search here is over any possible $s^{\prime }\in \xi $, which is often in the millions for most real-world KGs, and any $r^{\prime }\in \mathcal {R}$. We also identify adversaries that increase the prediction score for a specific false triple, i.e., for a target fake fact ${s,r,o}$, the adversary is $\operatorname{argmin}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$, where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ is defined as before.
Challenges
There are a number of crucial challenges when conducting such adversarial attacks on KGs. First, evaluating the effect of changing the KG on the score of the target fact ($\overline{\psi }(s,r,o)$) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$. Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\xi | \times |\mathcal {R}|$, which, for example in the YAGO3-10 KG, can be as many as $4.5M$ possible facts for a single target prediction.
Efficiently Identifying the Modification
In this section, we propose algorithms to address the aforementioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications.
First-order Approximation of Influence
We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $e_s$, $e_r$, and $e_o$ to denote the embeddings of $s,r,o$ at the solution of $\operatornamewithlimits{argmin} \mathcal {L}(G)$, and when considering the adversarial triple $\langle s^{\prime }, r^{\prime }, o \rangle $, we use $\overline{e}_s$, $\overline{e}_r$, and $\overline{e}_o$ for the new embeddings of $s,r,o$, respectively. Thus $\overline{e}_s, \overline{e}_r, \overline{e}_o$ is a solution to $\operatornamewithlimits{argmin} \mathcal {L}(\overline{G})$, which can also be written as $\operatornamewithlimits{argmin} \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$. Since the adversarial triple $\langle s^{\prime }, r^{\prime }, o \rangle $ shares only the object $o$ with the target, and a single triple has a negligible effect on the embeddings of other entities and relations, we assume $\overline{e}_s \approx e_s$ and $\overline{e}_r \approx e_r$, and only study the change in $e_o$. Computing this change exactly (by retraining, or by inverting the Hessian over all parameters, which is $O(n^3)$) is infeasible, so we approximate it using only the $d\times d$ Hessian over $e_o$. The change in the score of the target triple is then $\overline{\psi }(s,r,o)-\psi (s, r, o) = z_{s,r}\cdot (\overline{e}_o - e_o)$, where $z_{s, r} = f(e_s, e_r)$. To approximate $\overline{e}_o - e_o$, note that $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$, and let $z_{s^{\prime }, r^{\prime }} = f(e_{s^{\prime }}, e_{r^{\prime }})$ and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$. The derivative of the loss over $e_o$ is $\nabla _{e_o} \mathcal {L}(\overline{G}) = \nabla _{e_o} \mathcal {L}(G) - (1-\varphi ) z_{s^{\prime },r^{\prime }}$. At convergence, after retraining, we expect $\nabla _{e_o} \mathcal {L}(\overline{G})=0$; a first-order Taylor approximation gives $0 \approx -(1-\varphi ) z_{s^{\prime },r^{\prime }} + (H_o + \varphi (1-\varphi ) z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }})(\overline{e}_o - e_o)$, where $H_o$ is the $d\times d$ Hessian matrix for $o$, i.e., the second-order derivative of the loss w.r.t. $e_o$, computed sparsely. Since $H_o$ is positive definite in practice, $H_o + \varphi (1-\varphi ) z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }}$ is positive definite and invertible, and solving for $\overline{e}_o - e_o$ gives $\overline{e}_o - e_o = (1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }})^{-1} z_{s^{\prime },r^{\prime }}$. Substituting back, the first-order approximation of the influence of adding $\langle s^{\prime }, r^{\prime }, o \rangle $ is $\overline{\psi }(s,r,o)-\psi (s, r, o) = z_{s,r} \cdot \big ((1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }})^{-1} z_{s^{\prime },r^{\prime }}\big )$, which is efficient to compute since $H_o$ is only $d\times d$. The same derivation applies to removing an observed triple $\langle s^{\prime }, r^{\prime }, o \rangle $, with the sign of the added loss term flipped, yielding an analogous closed form for the change in score.
Continuous Optimization for Search
Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\langle s^{\prime }, r^{\prime }, o \rangle $. This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\langle s^{\prime }, r^{\prime }, o \rangle $ by using a gradient-based algorithm on the vector $z_{s^{\prime },r^{\prime }}$ in the embedding space (reminder, $z_{s^{\prime },r^{\prime }}=f(e_{s^{\prime }},e_{r^{\prime }})$), solving the following continuous optimization problem in $\mathbb {R}^d$: $\operatorname{argmax}_{z_{s^{\prime }, r^{\prime }}} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$. After identifying the optimal $z_{s^{\prime }, r^{\prime }}$, we still need to generate the pair $(s^{\prime },r^{\prime })$. We design a network, shown in Figure 2, that maps the vector $z_{s^{\prime },r^{\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\prime },r^{\prime })$. In particular, we train an auto-encoder where the encoder is fixed to receive $s$ and $r$ as one-hot inputs, and calculates $z_{s,r}$ in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $z_{s,r}$ as input and produce $s$ and $r$, essentially inverting $f$. We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given $z_{s^{\prime },r^{\prime }}$. The accuracy of recovered pairs (and of each argument) is given in Table 1. As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $z_{s^{\prime },r^{\prime }}$ to $(s^{\prime },r^{\prime })$ pairs.
Experiments
We evaluate our method by ( "Influence Function vs. CRIAGE" ) comparing our estimate with the actual effect of the attacks, ( "Robustness of Link Prediction Models" ) studying the effect of adversarial attacks on evaluation metrics, ( "Interpretability of Models" ) exploring its application to the interpretability of KG representations, and ( "Finding Errors in Knowledge Graphs" ) detecting incorrect triples.
Influence Function vs. CRIAGE
To evaluate the quality of our approximations and compare with the influence function (IF), we conduct leave-one-out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with the rankings as predicted by: CRIAGE, the influence function with and without the Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate the addition attack by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\rho $ and Kendall's $\tau $ rank correlation coefficients over 10 random target samples are provided in Table 3. CRIAGE performs comparably to the influence function, confirming that our approximation is accurate. The influence function is slightly more accurate because it uses the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $e_o$. The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ($d\le 10$) before we run out of memory. In Figure 3, we show the time to compute a single adversary by IF compared to CRIAGE, as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As the figure shows, CRIAGE is mostly unaffected by the number of entities while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, IF is infeasible for them.
Robustness of Link Prediction Models
Now we evaluate the effectiveness of CRIAGE in attacking link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and to measure their effect on the MRR and Hits@k metrics (ranking evaluations) after conducting the attack and retraining the model.
Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\langle s, r, o \rangle $, we consider two baselines: 1) choosing a random fake fact $\langle s^{\prime }, r^{\prime }, o \rangle $ (Random Attack); 2) finding $(s^{\prime }, r^{\prime })$ by first calculating $f(e_s, e_r)$ and then feeding $-f(e_s, e_r)$ to the decoder of the inverter function (Opposite Attack). In addition to the main attack, we introduce two other alternatives of our method: (1) a variant that uses the same procedure to increase the score of a fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for it, and (2) a variant that selects between these two attacks based on which has a higher estimated change in score.
All-Test The result of the attack on all test facts as targets is provided in Table 4. CRIAGE outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. The variant that targets fake facts is more effective than the direct attack, since changing the score of a fake fact is easier than that of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix "Sample Adversarial Attacks"), the attack mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.
Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples that 1) the model predicts correctly, 2) difference between their scores and the negative sample with the highest score is minimum. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4 . The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming other baselines, they indicate that ConvE's confidence is much more robust.
Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them. Moreover, the attack affects DistMult more on the playsFor and isMarriedTo relations, while affecting ConvE more on the isConnectedTo relation.
Examples Sample adversarial attacks are provided in Table 5. The attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type.
Interpretability of Models
To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each prediction, we identify the most influential fact using our removal attack. Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact, and that appear more than $90\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\wedge R_2(c,b)\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such as that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilize the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, our extracted rules contain all the rules provided by that approach, demonstrating that our method can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.
Finding Errors in Knowledge Graphs
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
To evaluate this application, we inject random triples into the graph, and measure the ability of our method to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\langle s^{\prime }, r, o\rangle $ where $s^{\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ where $s^{\prime }$ and $r^{\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in Table 7. When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that our method outperforms both of these with a considerable gap, obtaining an accuracy of $42\%$ and $55\%$ in detecting errors.
Related Work
Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although CRIAGE is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix "First-order Approximation of the Change For TransE" .
Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using CRIAGE, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions.
Interpretability and Adversarial Modification There has been significant recent interest in conducting adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain interpretability and, further, to evaluate the robustness of those models. BIBREF20 uses influence functions to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to applying their established method to KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of the search space makes the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs.
Conclusions
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce CRIAGE, completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. CRIAGE uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that CRIAGE can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we use CRIAGE to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using CRIAGE, we introduce a novel automated error detection method for knowledge graphs. We have released the open-source implementation of our models at: https://pouyapez.github.io/criage.
Acknowledgements
We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.
Appendix
We approximate the change in the score of the target triple upon applying attacks other than the $\langle s^{\prime }, r^{\prime }, o \rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relation embeddings. As a result, we just need to study the attacks in the form of $\langle s, r^{\prime }, o \rangle $ and $\langle s, r^{\prime }, o^{\prime } \rangle $. Defining the scoring function as $\psi (s,r,o) = f(e_s, e_r) \cdot e_o = z_{s,r} \cdot e_o$, we further assume that $\psi (s,r,o) = e_s \cdot f(e_r, e_o) = e_s \cdot z_{r,o}$.
Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle $
Using a similar argument as for the attacks in the form of $\langle s^{\prime }, r^{\prime }, o \rangle $, we can calculate the effect of the attack, $\overline{\psi }{(s,r,o)}-\psi (s, r, o)$, as: $\overline{\psi }(s,r,o)-\psi (s, r, o)=(\overline{e}_s - e_s) \cdot z_{r,o}$, where $z_{r, o} = f(e_r, e_o)$.
We now derive an efficient computation for $(\overline{e}_s - e_s)$. First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s, r^{\prime }, o^{\prime } \rangle )$ over $e_s$ is: $\nabla _{e_s} \mathcal {L}(\overline{G}) = \nabla _{e_s} \mathcal {L}(G) - (1-\varphi ) z_{r^{\prime }, o^{\prime }}$, where $z_{r^{\prime }, o^{\prime }} = f(e_{r^{\prime }}, e_{o^{\prime }})$, and $\varphi = \sigma (\psi (s,r^{\prime },o^{\prime }))$. At convergence, after retraining, we expect $\nabla _{e_s} \mathcal {L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla _{e_s} \mathcal {L}(\overline{G})$ to get: $0 \approx - (1-\varphi ) z_{r^{\prime },o^{\prime }}+ (H_s+\varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})(\overline{e}_s - e_s)$, where $H_s$ is the $d\times d$ Hessian matrix for $s$, i.e., the second-order derivative of the loss w.r.t. $e_s$, computed sparsely. Solving for $\overline{e}_s - e_s$ gives us: $\overline{e}_s - e_s = (1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})^{-1} z_{r^{\prime },o^{\prime }}$. In practice, $H_s$ is positive definite, making $H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}$ positive definite as well, and invertible. Then, we compute the score change as: $\overline{\psi }(s,r,o)-\psi (s, r, o)= z_{r,o} \cdot (\overline{e}_s - e_s) = \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }})^{-1} z_{r^{\prime },o^{\prime }}\big )\cdot z_{r,o}$.
Modifications of the Form $\langle s, r^{\prime }, o \rangle $
In this section we approximate the effect of an attack in the form of $\langle s, r^{\prime }, o \rangle $. In contrast to $\langle s^{\prime }, r^{\prime }, o \rangle $ attacks, for this scenario we need to consider the change in $e_s$ as well as the change in $e_o$ upon applying the attack when approximating the change in the score. Using previous results, we can approximate $\overline{e}_o - e_o$ as: $\overline{e}_o - e_o = (1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}$, and similarly, we can approximate $\overline{e}_s - e_s$ as: $\overline{e}_s - e_s = (1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}$, where $H_s$ ($H_o$) is the Hessian matrix over $e_s$ ($e_o$). Then, using these approximations: $z_{s,r}\cdot (\overline{e}_o - e_o) = z_{s,r} \cdot \big ((1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}\big )$, and: $(\overline{e}_s - e_s)\cdot z_{r,o}= \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}\big )\cdot z_{r,o}$, and then calculate the change in the score as: $\overline{\psi }(s,r,o)-\psi (s, r, o)= z_{s,r}\cdot (\overline{e}_o - e_o) +(\overline{e}_s - e_s)\cdot z_{r,o} = z_{s,r} \cdot \big ((1-\varphi ) (H_o + \varphi (1-\varphi ) z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }})^{-1} z_{s,r^{\prime }}\big )+ \big ((1-\varphi ) (H_s + \varphi (1-\varphi ) z_{r^{\prime },o}^\intercal z_{r^{\prime },o})^{-1} z_{r^{\prime },o}\big )\cdot z_{r,o}$.
First-order Approximation of the Change For TransE
Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack on $\overline{\psi }{(s,r,o)}$ (where $\psi {(s,r,o)}=\Vert e_s+e_r-e_o\Vert $ ), we need to compute the new object embedding $\overline{e}_o$. First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ over $e_o$ is: $\nabla _{e_o} \mathcal {L}(\overline{G}) = \nabla _{e_o} \mathcal {L}(G) + (1-\varphi ) \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)}$, where $z_{s^{\prime }, r^{\prime }} = e_{s^{\prime }}+ e_{r^{\prime }}$, and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$. At convergence, after retraining, we expect $\nabla _{e_o} \mathcal {L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla _{e_o} \mathcal {L}(\overline{G})$ to get: $0 \approx (1-\varphi ) \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)}+(H_o - H_{s^{\prime },r^{\prime },o})(\overline{e}_o - e_o)$, where $H_{s^{\prime },r^{\prime },o} = \varphi (1-\varphi )\frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^2}+ \frac{1-\varphi }{\psi (s^{\prime },r^{\prime },o)}I-(1-\varphi ) \frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^3}$, and $H_o$ is the $d\times d$ Hessian matrix for $o$, i.e., the second-order derivative of the loss w.r.t. $e_o$, computed sparsely. Solving for $\overline{e}_o$ gives us: $\overline{e}_o = -(1-\varphi ) (H_o - H_{s^{\prime },r^{\prime },o})^{-1} \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} + e_o$. Then, we compute the score change as: $\overline{\psi }(s,r,o)= \Vert e_s+e_r-\overline{e}_o\Vert = \Vert e_s+e_r+(1-\varphi ) (H_o - H_{s^{\prime },r^{\prime },o})^{-1} \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} - e_o\Vert $.
Calculating this expression is efficient since $H_o$ is a $d\times d$ matrix.
Sample Adversarial Attacks
In this section, we provide the output of CRIAGE for some target triples. Sample adversarial attacks are provided in Table 5 . As it shows, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. | WN18 and YAGO3-10
185841e979373808d99dccdade5272af02b98774 | 185841e979373808d99dccdade5272af02b98774_0 | Q: How is this approach used to detect incorrect facts?
Text: Introduction
Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE). First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princess Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinand Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princess Henriette's child. We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 is depicted in Figure 1 . Such perturbations to the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .
Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors.
Background and Notation
In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle $, where $s,o\in \xi $, the set of entities, and $r\in \mathcal {R}$, the set of relations. To model the KG, a scoring function $\psi :\xi \times \mathcal {R} \times \xi \rightarrow \mathbb {R}$ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = f(e_s, e_r) \cdot e_o$, where $e_s,e_r,e_o\in \mathbb {R}^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $f(e_s, e_r) = e_s \odot e_r$, where $\odot $ is the element-wise multiplication operator. Similarly, in ConvE, $f(e_s, e_r)$ is computed by a convolution on the concatenation of $e_s$ and $e_r$.
We use the same setup as BIBREF10 for training, i.e., we incorporate a binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary labels $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth $\sigma (\psi (s,r,o))$ for $\langle s,r,o\rangle $ , the loss is defined as:
$$\mathcal {L}(G) = \sum _{(s,r)}\sum _{o} -y^{s,r}_o\log (\sigma (\psi (s,r,o))) - (1-y^{s,r}_o)\log (1 - \sigma (\psi (s,r,o))).$$
Gradient descent is used to learn the embeddings $e_s, e_r, e_o$ , and the parameters of $f$ , if any.
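A small sketch of this binary cross-entropy objective for a single subject-relation pair; the `scores` and `labels` arrays are placeholders for the model outputs and the $y^{s,r}_o$ labels, and the mean reduction is an assumption made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(scores: np.ndarray, labels: np.ndarray) -> float:
    """Binary cross-entropy over all candidate objects for one (s, r) pair.

    scores: psi(s, r, o) for every candidate object o.
    labels: y^{s,r}_o in {0, 1} marking negative/positive facts.
    """
    p = sigmoid(scores)
    eps = 1e-12  # numerical stability
    return float(-np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps)))
```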
Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE)
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ , i.e., $s^{\prime }$ and $r^{\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\langle s, r^{\prime }, o^{\prime }\rangle $ and $\langle s, r^{\prime }, o\rangle $ in the appendices "Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle$" and "Modifications of the Form $\langle s, r^{\prime }, o \rangle$", and leave empirical evaluation of these modifications for future work.
Removing a fact (CRIAGE-Remove)
For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define the influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work has used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\prime },r^{\prime },o}\in G$ such that the score $\psi (s,r,o)$ when trained on $G$ and the score $\overline{\psi }(s,r,o)$ when trained on $G-\lbrace {s^{\prime },r^{\prime },o}\rbrace $ are maximally different, i.e.
$$\operatorname{argmax}_{(s^{\prime }, r^{\prime })\in \text{Nei}(o)} \Delta _{(s^{\prime },r^{\prime })}(s,r,o),$$
where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ , and $\text{Nei}(o)=\lbrace (s^{\prime },r^{\prime })|\langle s^{\prime },r^{\prime },o \rangle \in G \rbrace $ .
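In code, the removal adversary reduces to enumerating the neighbors of the object and ranking them by the estimated score change; the sketch below is a minimal illustration, and `estimate_delta` is a hypothetical callable standing in for the first-order approximation derived later.

```python
def criage_remove(target, neighbors, estimate_delta):
    """Pick the neighboring triple whose removal changes the target score the most.

    target:    (s, r, o)
    neighbors: iterable of (s_prime, r_prime) with <s', r', o> observed in G
    estimate_delta: callable approximating Delta_{(s', r')}(s, r, o)
    """
    s, r, o = target
    return max(neighbors, key=lambda sr: estimate_delta(sr[0], sr[1], s, r, o))
```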
Adding a new fact (CRIAGE-Add)
We are also interested in investigating the robustness of models, i.e., how sensitive the predictions are to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$ , we are interested in identifying a single fake fact ${s^{\prime },r^{\prime },o}$ that, when added to the knowledge graph $G$ , changes the prediction score $\psi (s,r,o)$ the most. Using $\overline{\psi }(s,r,o)$ as the score after training on $G\cup \lbrace {s^{\prime },r^{\prime },o}\rbrace $ , we define the adversary as:
$$\operatorname{argmax}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o),$$
where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ . The search here is over any possible $s^{\prime }\in \xi $ , which is often in the millions for most real-world KGs, and any $r^{\prime }\in \mathcal {R}$ . We also identify adversaries that increase the prediction score for a specific false triple, i.e., for a target fake fact ${s,r,o}$ , the adversary is $\operatorname{argmin}_{(s^{\prime }, r^{\prime })} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ , where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ is defined as before.
Challenges
There are a number of crucial challenges when conducting such adversarial attacks on KGs. First, evaluating the effect of changing the KG on the score of the target fact ( $\overline{\psi }(s,r,o)$ ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$ . Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\xi | \times |\mathcal {R}|$ , which, for example in the YAGO3-10 KG, can be as many as $4.5M$ possible facts for a single target prediction.
Efficiently Identifying the Modification
In this section, we propose algorithms to address the aforementioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications.
First-order Approximation of Influence
We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $e_s$ , $e_r$ , and $e_o$ to denote the embeddings of $s,r,o$ at the solution of $\operatorname{argmin} \mathcal {L}(G)$ , and when considering the adversarial triple $\langle s^{\prime }, r^{\prime }, o \rangle $ , we use $\overline{e}_s$ , $\overline{e}_r$ , and $\overline{e}_o$ for the new embeddings of $s,r,o$ , respectively. Thus $\overline{e}_o$ is a solution to $\operatorname{argmin} \mathcal {L}(\overline{G})$ , where $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ . Since the adversarial triple shares only the object with the target, and a single triple has little effect on the embeddings of $s$ , $r$ , $s^{\prime }$ , and $r^{\prime }$ , we focus on the change in $e_o$ and write the change in the target score as:
$$\overline{\psi }(s,r,o)-\psi (s, r, o) = z_{s,r}\cdot (\overline{e}_o - e_o), \quad \text{where } z_{s, r} = f(e_s, e_r).$$
To estimate $\overline{e}_o - e_o$ without retraining, we use the fact that at convergence $\nabla _{e_o} \mathcal {L}(\overline{G})=0$ , and perform a first-order Taylor approximation of $\nabla _{e_o} \mathcal {L}(\overline{G})$ around $e_o$ . With $z_{s^{\prime }, r^{\prime }} = f(e_{s^{\prime }}, e_{r^{\prime }})$ and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$ , this yields:
$$\overline{e}_o - e_o = (1-\varphi )\, \big (H_o + \varphi (1-\varphi )\, z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }}\big )^{-1} z_{s^{\prime },r^{\prime }}^\intercal ,$$
where $H_o$ is the $d\times d$ Hessian matrix for $o$ , i.e. the second-order derivative of the loss w.r.t. $e_o$ , computed sparsely. Because $H_o + \varphi (1-\varphi )\, z_{s^{\prime },r^{\prime }}^\intercal z_{s^{\prime },r^{\prime }}$ is only $d\times d$ , inverting it is cheap, avoiding the $O(n^3)$ cost of working with the Hessian over all parameters. Substituting this estimate into the expression above gives an efficient approximation of $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ for adding $\langle s^{\prime }, r^{\prime }, o \rangle $ ; the derivation for removing an observed triple $\langle s^{\prime }, r^{\prime }, o \rangle $ is analogous, with its loss term subtracted from $\mathcal {L}(G)$ instead of added.
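A hedged NumPy sketch of this approximation as reconstructed above; the vectors `z_sr`, `z_spr`, `e_o` and the sparse Hessian `H_o` are assumed to be computed elsewhere, and the exact update follows this reconstruction rather than the authors' released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def approx_score_change(z_sr, z_spr, e_o, H_o):
    """First-order estimate of psi_bar(s,r,o) - psi(s,r,o) after adding <s', r', o>.

    z_sr:  f(e_s, e_r) for the target triple
    z_spr: f(e_s', e_r') for the candidate adversarial triple
    e_o:   current object embedding
    H_o:   d x d Hessian of the loss w.r.t. e_o (computed sparsely elsewhere)
    """
    phi = sigmoid(float(z_spr @ e_o))                   # sigma(psi(s', r', o))
    A = H_o + phi * (1 - phi) * np.outer(z_spr, z_spr)  # d x d, cheap to invert
    delta_e_o = (1 - phi) * np.linalg.solve(A, z_spr)   # approx. e_o_bar - e_o
    return float(z_sr @ delta_e_o)
```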
Continuous Optimization for Search
Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\langle s^{\prime }, r^{\prime }, o \rangle $ . This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\langle s^{\prime }, r^{\prime }, o \rangle $ by using a gradient-based algorithm on the vector $z_{s^{\prime },r^{\prime }}$ in the embedding space (reminder, $z_{s^{\prime },r^{\prime }}=f(e_{s^{\prime }},e_{r^{\prime }})$ ), solving the following continuous optimization problem in $\mathbb {R}^d$ :
$$\operatorname{argmax}_{z_{s^{\prime }, r^{\prime }}} \Delta _{(s^{\prime },r^{\prime })}(s,r,o).$$
After identifying the optimal $z_{s^{\prime }, r^{\prime }}$ , we still need to generate the pair $(s^{\prime },r^{\prime })$ . We design a network, shown in Figure 2 , that maps the vector $z_{s^{\prime },r^{\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\prime },r^{\prime })$ . In particular, we train an auto-encoder where the encoder is fixed to receive $s^{\prime }$ and $r^{\prime }$ as one-hot inputs, and calculates $z_{s^{\prime },r^{\prime }}$ in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $z_{s^{\prime },r^{\prime }}$ as input and produce $s^{\prime }$ and $r^{\prime }$ , essentially inverting the encoding $f(e_{s^{\prime }},e_{r^{\prime }})$ . We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the vector $z_{s^{\prime },r^{\prime }}$ . The accuracy of recovered pairs (and of each argument) is given in Table 1 . As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $z_{s^{\prime },r^{\prime }}$ to $(s^{\prime },r^{\prime })$ pairs.
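A minimal sketch of the continuous search step; `delta_fn` and `grad_fn` are hypothetical callables (e.g., the approximation above and its gradient via autodiff or finite differences), and the step size, iteration count, and initialization are arbitrary choices. Decoding the returned vector into a concrete $(s^{\prime }, r^{\prime })$ pair is then handled by the trained inverter network.

```python
import numpy as np

def continuous_attack_search(delta_fn, grad_fn, d, steps=100, lr=0.1, seed=0):
    """Gradient ascent over the relaxed vector z_{s',r'} in R^d.

    delta_fn(z): estimated change in the target score if a fact encoded as z is added.
    grad_fn(z):  gradient of delta_fn at z.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(scale=0.1, size=d)
    best_z, best_delta = z.copy(), delta_fn(z)
    for _ in range(steps):
        z = z + lr * grad_fn(z)          # ascend the estimated score change
        if delta_fn(z) > best_delta:
            best_z, best_delta = z.copy(), delta_fn(z)
    return best_z                         # to be decoded into (s', r') by the inverter
```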
Experiments
We evaluate CRIAGE by ("Influence Function vs CRIAGE") comparing its estimate with the actual effect of the attacks, ("Robustness of Link Prediction Models") studying the effect of adversarial attacks on evaluation metrics, ("Interpretability of Models") exploring its application to the interpretability of KG representations, and ("Finding Errors in Knowledge Graphs") detecting incorrect triples.
Influence Function vs CRIAGE
To evaluate the quality of our approximations and compare with the influence function (IF), we conduct leave-one-out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with the rankings predicted by: CRIAGE, the influence function with and without the Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate the addition of facts by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\rho $ and Kendall's $\tau $ rank correlation coefficients over 10 random target samples are provided in Table 3 . CRIAGE performs comparably to the influence function, confirming that our approximation is accurate. The influence function is slightly more accurate because it uses the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $e_o$ . The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ( $d\le 10$ ) before we run out of memory. In Figure 3 , we show the time to compute a single adversary by IF compared to CRIAGE, as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As it shows, CRIAGE is mostly unaffected by the number of entities, while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, IF is infeasible for them.
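The rank-agreement evaluation can be computed directly with SciPy; this is a small illustrative helper, not the authors' evaluation script.

```python
from scipy.stats import kendalltau, spearmanr

def rank_agreement(exact_changes, estimated_changes):
    """Compare exact leave-one-out score changes with the estimated ones."""
    rho, _ = spearmanr(exact_changes, estimated_changes)
    tau, _ = kendalltau(exact_changes, estimated_changes)
    return rho, tau
```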
Robustness of Link Prediction Models
Now we evaluate the effectiveness of CRIAGE in successfully attacking link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and measure their effect on MRR and Hits@K metrics (ranking evaluations) after conducting the attack and retraining the model.
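For reference, a minimal sketch of the ranking metrics used here, assuming the 1-based rank of each true object has already been computed.

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Ranking metrics computed before and after the attack.

    ranks: 1-based rank of the true object among all candidates, per test triple.
    """
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return metrics
```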
Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\langle s, r, o \rangle $ , we consider two baselines: 1) choosing a random fake fact $\langle s^{\prime }, r^{\prime }, o \rangle $ (Random Attack); 2) finding $(s^{\prime }, r^{\prime })$ by first calculating $f(e_s, e_r)$ and then feeding $-f(e_s, e_r)$ to the decoder of the inverter function (Opposite Attack). In addition to CRIAGE-Add, we introduce two other alternatives of our method: (1) CRIAGE-FT, which uses CRIAGE to increase the score of a fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for it, and (2) CRIAGE-Best, which selects between the CRIAGE-Add and CRIAGE-FT attacks based on which has a higher estimated change in score.
All-Test The result of the attack on all test facts as targets is provided in Table 4. CRIAGE-Add outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. CRIAGE-FT is more effective than CRIAGE-Add since changing the score of a fake fact is easier than that of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix "Sample Adversarial Attacks"), CRIAGE-Add mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.
Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples such that 1) the model predicts them correctly, and 2) the difference between their scores and the highest-scoring negative sample is minimal. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4. The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming the other baselines, the results indicate that ConvE's confidence is much more robust.
Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them. Moreover, the attacks affect DistMult more on the playsFor and isMarriedTo relations, while affecting ConvE more on the isConnectedTo relations.
Examples Sample adversarial attacks are provided in Table 5. The attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require different entity types.
Interpretability of Models
To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each prediction, we identify the most influential fact using CRIAGE-Remove. Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact, and that appear more than $90\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\wedge R_2(c,b)\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such as that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilize the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, the extracted rules contain all the rules provided by their approach, demonstrating that CRIAGE can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.
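A rough sketch of this aggregation step; the data layout (explanations as (target, removed fact, connecting relation) triples) and the helper name are assumptions made for illustration.

```python
from collections import Counter, defaultdict

def extract_length2_rules(explanations, min_support=0.9):
    """Aggregate removal-based explanations into length-2 horn rules.

    explanations: iterable of (target, removed, R1) where
      target  = (a, R, b)   -- the predicted triple being explained,
      removed = (c, R2, b)  -- the most influential fact found for it,
      R1      -- a relation linking a and c in the graph (looked up beforehand).
    A rule R1(a,c) ^ R2(c,b) => R(a,b) is kept when the (R1, R2) pattern
    explains at least `min_support` of the targets for relation R.
    """
    patterns = defaultdict(Counter)
    totals = Counter()
    for (_, R, _), (_, R2, _), R1 in explanations:
        patterns[R][(R1, R2)] += 1
        totals[R] += 1
    rules = []
    for R, counts in patterns.items():
        for (R1, R2), n in counts.items():
            if n / totals[R] >= min_support:
                rules.append((R1, R2, R))
    return rules
```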
Finding Errors in Knowledge Graphs
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put the least trust in this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
To evaluate this application, we inject random triples into the graph, and measure the ability of CRIAGE to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\langle s^{\prime }, r, o\rangle $ where $s^{\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ where $s^{\prime }$ and $r^{\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in Table 7. When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that CRIAGE outperforms both of these with a considerable gap, obtaining an accuracy of $42\%$ and $55\%$ in detecting errors.
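A minimal sketch of this error-detection heuristic; `estimate_delta` is again a hypothetical stand-in for the score-change approximation, and whether the signed or absolute change is used is an assumption made here.

```python
def least_influential_neighbor(target, neighbors, estimate_delta):
    """Flag the neighboring triple whose removal changes the target score the least.

    Intuition: an erroneous triple is inconsistent with its neighborhood, so the
    model should rely on it the least when predicting the (presumably correct)
    training triple.
    """
    s, r, o = target
    return min(neighbors, key=lambda sr: abs(estimate_delta(sr[0], sr[1], s, r, o)))
```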
Related Work
Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although CRIAGE is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix "First-order Approximation of the Change For TransE" .
Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using CRIAGE, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions.
Interpretability and Adversarial Modification There has been significant recent interest in conducting adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain interpretability, and further, to evaluate the robustness of those models. BIBREF20 uses influence functions to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to incorporating their established method on KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of the search space makes the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs.
Conclusions
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce CRIAGE, Completion Robustness and Interpretability via Adversarial Graph Edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. CRIAGE uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that CRIAGE can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we use CRIAGE to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using CRIAGE, we introduce a novel automated error detection method for knowledge graphs. We have released the open-source implementation of our models at: https://pouyapez.github.io/criage.
Acknowledgements
We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.
Appendix
We approximate the change in the score of the target triple upon applying attacks other than the $\langle s^{\prime }, r^{\prime }, o \rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relation embeddings. As a result, we just need to study attacks of the form $\langle s, r^{\prime }, o \rangle $ and $\langle s, r^{\prime }, o^{\prime } \rangle $ . Defining the scoring function as $\psi (s,r,o) = f(e_s, e_r) \cdot e_o = z_{s,r} \cdot e_o$ , we further assume that $\psi (s,r,o) = e_s \cdot f(e_r, e_o) = e_s \cdot z_{r,o}$ .
Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle$
Using a similar argument as for attacks of the form $\langle s^{\prime }, r^{\prime }, o \rangle $ , we can calculate the effect of the attack, $\overline{\psi }{(s,r,o)}-\psi (s, r, o)$ , as:
$$\overline{\psi }(s,r,o)-\psi (s, r, o)=(\overline{e}_s - e_s)\cdot z_{r, o}, \quad \text{where } z_{r, o} = f(e_r, e_o).$$
We now derive an efficient computation for $\overline{e}_s - e_s$ . First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s, r^{\prime }, o^{\prime } \rangle )$ over $e_s$ is:
$$\nabla _{e_s} \mathcal {L}(\overline{G}) = \nabla _{e_s} \mathcal {L}(G) - (1-\varphi )\, z_{r^{\prime }, o^{\prime }},$$
where $z_{r^{\prime }, o^{\prime }} = f(e_{r^{\prime }},e_{o^{\prime }})$ , and $\varphi = \sigma (\psi (s,r^{\prime },o^{\prime }))$ . At convergence, after retraining, we expect $\nabla _{e_s} \mathcal {L}(\overline{G})=0$ . We perform a first-order Taylor approximation of $\nabla _{e_s} \mathcal {L}(\overline{G})$ to get:
$$0 \approx - (1-\varphi )\, z_{r^{\prime },o^{\prime }}+\big (H_s+\varphi (1-\varphi )\, z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}\big )(\overline{e}_s - e_s),$$
where $H_s$ is the $d\times d$ Hessian matrix for $s$ , i.e. the second-order derivative of the loss w.r.t. $e_s$ , computed sparsely. Solving for $\overline{e}_s - e_s$ gives us:
$$\overline{e}_s - e_s = (1-\varphi )\, \big (H_s + \varphi (1-\varphi )\, z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}\big )^{-1} z_{r^{\prime },o^{\prime }}^\intercal .$$
In practice, $H_s$ is positive definite, making $H_s + \varphi (1-\varphi )\, z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}$ positive definite as well, and invertible. Then, we compute the score change as:
$$\overline{\psi }(s,r,o)-\psi (s, r, o)= (\overline{e}_s - e_s)\cdot z_{r,o} = \Big ((1-\varphi )\, \big (H_s + \varphi (1-\varphi )\, z_{r^{\prime },o^{\prime }}^\intercal z_{r^{\prime },o^{\prime }}\big )^{-1} z_{r^{\prime },o^{\prime }}^\intercal \Big )\cdot z_{r,o}.$$
Modifications of the Form $\langle s, r^{\prime }, o \rangle$
In this section we approximate the effect of an attack in the form of $\langle s, r^{\prime }, o \rangle $ . In contrast to $\langle s^{\prime }, r^{\prime }, o \rangle $ attacks, for this scenario we also need to consider the change in $e_s$ , upon applying the attack, in the approximation of the change in the score. Using the previous results, we can approximate $\overline{e}_o - e_o$ as:
$$\overline{e}_o - e_o = (1-\varphi )\, \big (H_o + \varphi (1-\varphi )\, z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }}\big )^{-1} z_{s,r^{\prime }}^\intercal ,$$
and similarly, we can approximate $\overline{e}_s - e_s$ as:
$$\overline{e}_s - e_s = (1-\varphi )\, \big (H_s + \varphi (1-\varphi )\, z_{r^{\prime },o}^\intercal z_{r^{\prime },o}\big )^{-1} z_{r^{\prime },o}^\intercal ,$$
where $H_s$ is the Hessian matrix over $e_s$ . Then, using these approximations:
$$z_{s,r}\cdot (\overline{e}_o - e_o) = z_{s,r}\cdot \Big ((1-\varphi )\, \big (H_o + \varphi (1-\varphi )\, z_{s,r^{\prime }}^\intercal z_{s,r^{\prime }}\big )^{-1} z_{s,r^{\prime }}^\intercal \Big )$$
and:
$$(\overline{e}_s - e_s)\cdot z_{r,o} = \Big ((1-\varphi )\, \big (H_s + \varphi (1-\varphi )\, z_{r^{\prime },o}^\intercal z_{r^{\prime },o}\big )^{-1} z_{r^{\prime },o}^\intercal \Big )\cdot z_{r,o},$$
and then calculate the change in the score as:
$$\overline{\psi }(s,r,o)-\psi (s, r, o) = z_{s,r}\cdot (\overline{e}_o - e_o) + (\overline{e}_s - e_s)\cdot z_{r,o}.$$
First-order Approximation of the Change For TransE
Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack on $\overline{\psi }{(s,r,o)}$ (where $\psi {(s,r,o)}=\Vert e_s+e_r-e_o\Vert $ ), we need to compute $\overline{e}_o$ . First, the derivative of the loss $\mathcal {L}(\overline{G})= \mathcal {L}(G)+\mathcal {L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ over $e_o$ is:
$$\nabla _{e_o} \mathcal {L}(\overline{G}) = \nabla _{e_o} \mathcal {L}(G) + (1-\varphi )\,\frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)},$$
where $z_{s^{\prime }, r^{\prime }} = e_{s^{\prime }}+ e_{r^{\prime }}$ , and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$ . At convergence, after retraining, we expect $\nabla _{e_o} \mathcal {L}(\overline{G})=0$ . We perform a first-order Taylor approximation of $\nabla _{e_o} \mathcal {L}(\overline{G})$ to get:
$$0 \approx (1-\varphi )\,\frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)}+\big (H_o - H_{s^{\prime },r^{\prime },o}\big )(\overline{e}_o-e_o),$$
$$H_{s^{\prime },r^{\prime },o} = \varphi (1-\varphi )\frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^2}+ \frac{1-\varphi }{\psi (s^{\prime },r^{\prime },o)}\,I-(1-\varphi )\, \frac{(z_{s^{\prime }, r^{\prime }}-e_o)(z_{s^{\prime }, r^{\prime }}-e_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^3},$$
where $H_o$ is the $d\times d$ Hessian matrix for $o$ , i.e., the second-order derivative of the loss w.r.t. $e_o$ , computed sparsely. Solving for $\overline{e}_o$ gives us:
$$\overline{e}_o = -(1-\varphi )\, \big (H_o - H_{s^{\prime },r^{\prime },o}\big )^{-1} \frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} + e_o.$$
Then, we compute the new score as:
$$\overline{\psi }(s,r,o)= \Vert e_s+e_r-\overline{e}_o\Vert = \Big \Vert e_s+e_r-e_o+(1-\varphi )\, \big (H_o - H_{s^{\prime },r^{\prime },o}\big )^{-1}\frac{z_{s^{\prime }, r^{\prime }}-e_o}{\psi (s^{\prime },r^{\prime },o)} \Big \Vert .$$
Calculating this expression is efficient since $H_o$ is a $d\times d$ matrix.
Sample Adversarial Attacks
In this section, we provide the output of CRIAGE for some target triples. Sample adversarial attacks are provided in Table 5. As it shows, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require different entity types. | if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data.
d427e3d41c4c9391192e249493be23926fc5d2e9 | d427e3d41c4c9391192e249493be23926fc5d2e9_0 | Q: Can this adversarial approach be used to directly improve model accuracy?
Text: Introduction
Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (). First, we consider perturbations that red!50!blackremove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one when removed and model retrained, will change the prediction of Princes Henriette's child. We also study attacks that green!50!blackadd a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 , is depicted in Figure 1 . Such perturbations to the the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .
Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors.
Background and Notation
In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle $ , where $s,o\in \xi $ , the set of entities, and $r\in $ , the set of relations. To model the KG, a scoring function $\psi :\xi \times \times \xi \rightarrow $ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = , ) \cdot $ , where $,,\in ^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $, ) = \odot $ , where $\odot $ is element-wise multiplication operator. Similarly, in ConvE, $, )$ is computed by a convolution on the concatenation of $$ and $s,o\in \xi $0 .
We use the same setup as BIBREF10 for training, i.e., incorporate binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\sigma (\psi (s,r,o))$ for $\langle s,r,o\rangle $ , the loss is defined as: (G) = (s,r)o ys,ro(((s,r,o)))
+ (1-ys,ro)(1 - ((s,r,o))). Gradient descent is used to learn the embeddings $,,$ , and the parameters of $, if any. $
Completion Robustness and Interpretability via Adversarial Graph Edits ()
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ i.e $s^{\prime }$ and $r^{\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\langle s, r^{\prime }, o^{\prime }\rangle $ and $\langle s, r^{\prime }, o\rangle $ in appendices "Modifications of the Form 〈s,r ' ,o ' 〉\langle s, r^{\prime }, o^{\prime } \rangle " and "Modifications of the Form 〈s,r ' ,o〉\langle s, r^{\prime }, o \rangle " , and leave empirical evaluation of these modifications for future work.
Removing a fact ()
For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work have used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\prime },r^{\prime },o}\in G$ such that the score $\psi (s,r,o)$ when trained on $G$ and the score $\overline{\psi }(s,r,o)$ when trained on $G-\lbrace {s^{\prime },r^{\prime },o}\rbrace $ are maximally different, i.e. *argmax(s', r')Nei(o) (s',r')(s,r,o) where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ , and $\text{Nei}(o)=\lbrace (s^{\prime },r^{\prime })|\langle s^{\prime },r^{\prime },o \rangle \in G \rbrace $ .
Adding a new fact ()
We are also interested in investigating the robustness of models, i.e., how sensitive are the predictions to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$ , we are interested in identifying a single fake fact ${s^{\prime },r^{\prime },o}$ that, when added to the knowledge graph $G$ , changes the prediction score $\psi (s,r,o)$ the most. Using $\overline{\psi }(s,r,o)$ as the score after training on $G\cup \lbrace {s^{\prime },r^{\prime },o}\rbrace $ , we define the adversary as: *argmax(s', r') (s',r')(s,r,o) where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$ . The search here is over any possible $s^{\prime }\in \xi $ , which is often in the millions for most real-world KGs, and $r^{\prime }\in $ . We also identify adversaries that increase the prediction score for specific false triple, i.e., for a target fake fact ${s,r,o}$ , the adversary is ${s^{\prime },r^{\prime },o}$0 , where ${s^{\prime },r^{\prime },o}$1 is defined as before.
Challenges
There are a number of crucial challenges when conducting such adversarial attack on KGs. First, evaluating the effect of changing the KG on the score of the target fact ( $\overline{\psi }(s,r,o)$ ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$ . Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\xi | \times ||$ , which, for example in YAGO3-10 KG, can be as many as $4.5 M$ possible facts for a single target prediction.
Efficiently Identifying the Modification
In this section, we propose algorithms to address mentioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications.
First-order Approximation of Influence
We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $$ , $$ , and $$ to denote the embeddings of $s,r,o$ at the solution of $\operatornamewithlimits{argmin} (G)$ , and when considering the adversarial triple $\langle s^{\prime }, r^{\prime }, o \rangle $ , we use $$ , $$ , and $$ for the new embeddings of $s,r,o$ , respectively. Thus $$0 is a solution to $$1 , which can also be written as $$2 . Similarly, $$3 s', r', o $$4 $$5 $$6 $$7 o $$8 $$9 $$0 $$1 $$2 $$3 O(n3) $$4 $$5 $$6 (s,r,o)-(s, r, o) $$7 - $$8 s, r = ,) $$9 - $s,r,o$0 (G)= (G)+(s', r', o ) $s,r,o$1 $s,r,o$2 s', r' = ',') $s,r,o$3 = ((s',r',o)) $s,r,o$4 eo (G)=0 $s,r,o$5 eo (G) $s,r,o$6 Ho $s,r,o$7 dd $s,r,o$8 o $s,r,o$9 $\operatornamewithlimits{argmin} (G)$0 - $\operatornamewithlimits{argmin} (G)$1 -= $\operatornamewithlimits{argmin} (G)$2 Ho $\operatornamewithlimits{argmin} (G)$3 Ho + (1-) s',r's',r' $\operatornamewithlimits{argmin} (G)$4 Ho $\operatornamewithlimits{argmin} (G)$5 dd $\operatornamewithlimits{argmin} (G)$6 d $\operatornamewithlimits{argmin} (G)$7 s,r,s',r'd $\operatornamewithlimits{argmin} (G)$8 s, r, o $\operatornamewithlimits{argmin} (G)$9 s', r', o $\langle s^{\prime }, r^{\prime }, o \rangle $0 $\langle s^{\prime }, r^{\prime }, o \rangle $1 $\langle s^{\prime }, r^{\prime }, o \rangle $2
Continuous Optimization for Search
Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\langle s^{\prime }, r^{\prime }, o \rangle $ . This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\langle s^{\prime }, r^{\prime }, o \rangle $ by using a gradient-based algorithm on vector $_{s^{\prime },r^{\prime }}$ in the embedding space (reminder, $_{s^{\prime },r^{\prime }}=^{\prime },^{\prime })$ ), solving the following continuous optimization problem in $^d$ : *argmaxs', r' (s',r')(s,r,o). After identifying the optimal $_{s^{\prime }, r^{\prime }}$ , we still need to generate the pair $(s^{\prime },r^{\prime })$ . We design a network, shown in Figure 2 , that maps the vector $_{s^{\prime },r^{\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\prime },r^{\prime })$ . In particular, we train an auto-encoder where the encoder is fixed to receive the $s$ and $\langle s^{\prime }, r^{\prime }, o \rangle $0 as one-hot inputs, and calculates $\langle s^{\prime }, r^{\prime }, o \rangle $1 in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $\langle s^{\prime }, r^{\prime }, o \rangle $2 as input and produce $\langle s^{\prime }, r^{\prime }, o \rangle $3 and $\langle s^{\prime }, r^{\prime }, o \rangle $4 , essentially inverting $\langle s^{\prime }, r^{\prime }, o \rangle $5 s, r $\langle s^{\prime }, r^{\prime }, o \rangle $6 s $\langle s^{\prime }, r^{\prime }, o \rangle $7 r $\langle s^{\prime }, r^{\prime }, o \rangle $8 s, r $\langle s^{\prime }, r^{\prime }, o \rangle $9 We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the $_{s^{\prime },r^{\prime }}$0 . The accuracy of recovered pairs (and of each argument) is given in Table 1 . As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $_{s^{\prime },r^{\prime }}$1 to $_{s^{\prime },r^{\prime }}$2 pairs.
Experiments
We evaluate by ( "Influence Function vs " ) comparing estimate with the actual effect of the attacks, ( "Robustness of Link Prediction Models" ) studying the effect of adversarial attacks on evaluation metrics, ( "Interpretability of Models" ) exploring its application to the interpretability of KG representations, and ( "Finding Errors in Knowledge Graphs" ) detecting incorrect triples.
Influence Function vs
To evaluate the quality of our approximations and compare with influence function (IF), we conduct leave one out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with ranking as predicted by: , influence function with and without Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\rho $ and Kendall's $\tau $ rank correlation coefficients over 10 random target samples is provided in Table 3 . performs comparably to the influence function, confirming that our approximation is accurate. Influence function is slightly more accurate because they use the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $$ . The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ( $d\le 10$ ) before we run out of memory. In Figure 3 , we show the time to compute a single adversary by IF compared to , as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As it shows, is mostly unaffected by the number of entities while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, making IF unfeasible for them.
Robustness of Link Prediction Models
Now we evaluate the effectiveness of to successfully attack link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and measuring their effect on MRR and Hits@ metrics (ranking evaluations) after conducting the attack and retraining the model.
Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\langle s, r, o \rangle $ , we consider two baselines: 1) choosing a random fake fact $\langle s^{\prime }, r^{\prime }, o \rangle $ (Random Attack); 2) finding $(s^{\prime }, r^{\prime })$ by first calculating $, )$ and then feeding $-, )$ to the decoder of the inverter function (Opposite Attack). In addition to , we introduce two other alternatives of our method: (1) , that uses to increase the score of fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for them, and (2) that selects between and attacks based on which has a higher estimated change in score.
All-Test The result of the attack on all test facts as targets is provided in the Table 4 . outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. is more effective than since changing the score of a fake fact is easier than of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix "Sample Adversarial Attacks" ), mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.
Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples that 1) the model predicts correctly, 2) difference between their scores and the negative sample with the highest score is minimum. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4 . The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming other baselines, they indicate that ConvE's confidence is much more robust.
Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4 , both DistMult and ConvE provide a more robust representation for isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of models in identifying them. Moreover, the affects DistMult more in playsFor and isMarriedTo relations while affecting ConvE more in isConnectedTo relations.
Examples Sample adversarial attacks are provided in Table 5 . attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity types.
Interpretability of Models
To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each predictions, we identify the most influential fact using . Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact from , and appear more than $90\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\wedge R_2(c,b)\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses the hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilizes the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, the extracted rules contain all the rules provided by , demonstrating that can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.
Finding Errors in Knowledge Graphs
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\langle s^{\prime }, r, o\rangle $ where $s^{\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\langle s^{\prime }, r^{\prime }, o\rangle $ where $s^{\prime }$ and $r^{\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\%$ and $55\%$ in detecting errors.
Related Work
Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix "First-order Approximation of the Change For TransE" .
Furthermore, there is a growing body of literature that incorporates an extra types of evidence for more informed embeddings such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using , we can gain a deeper understanding of these methods, especially those that build their embeddings wit hmultiplicative scoring functions.
Interpretability and Adversarial Modification There has been a significant recent interest in conducting an adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain the interpretability, and further, evaluate the robustness of those models. BIBREF20 uses influence function to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to incorporating their established method on KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of search space make the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of a path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs.
Conclusions
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce , completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we incorporate the to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using , we introduce a novel automated error detection method for knowledge graphs. We have release the open-source implementation of our models at: https://pouyapez.github.io/criage.
Acknowledgements
We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies.
Appendix
We approximate the change on the score of the target triple upon applying attacks other than the $\langle s^{\prime }, r^{\prime }, o \rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relations embeddings. As a result, we just need to study the attacks in the form of $\langle s, r^{\prime }, o \rangle $ and $\langle s, r^{\prime }, o^{\prime } \rangle $ . Defining the scoring function as $\psi (s,r,o) = , ) \cdot = _{s,r} \cdot $ , we further assume that $\psi (s,r,o) =\cdot (, ) =\cdot _{r,o}$ .
Modifications of the Form 〈s,r ' ,o ' 〉\langle s, r^{\prime }, o^{\prime } \rangle
Using similar argument as the attacks in the form of $\langle s^{\prime }, r^{\prime }, o \rangle $ , we can calculate the effect of the attack, $\overline{\psi }{(s,r,o)}-\psi (s, r, o)$ as: (s,r,o)-(s, r, o)=(-) s, r where $_{s, r} = (,)$ .
We now derive an efficient computation for $(-)$ . First, the derivative of the loss $(\overline{G})= (G)+(\langle s, r^{\prime }, o^{\prime } \rangle )$ over $$ is: es (G) = es (G) - (1-) r', o' where $_{r^{\prime }, o^{\prime }} = (^{\prime },^{\prime })$ , and $\varphi = \sigma (\psi (s,r^{\prime },o^{\prime }))$ . At convergence, after retraining, we expect $\nabla _{e_s} (\overline{G})=0$ . We perform first order Taylor approximation of $\nabla _{e_s} (\overline{G})$ to get: 0 - (1-)r',o'+
(Hs+(1-)r',o' r',o')(-) where $H_s$ is the $d\times d$ Hessian matrix for $s$ , i.e. second order derivative of the loss w.r.t. $$ , computed sparsely. Solving for $-$ gives us: -=
(1-) (Hs + (1-) r',o'r',o')-1 r',o' In practice, $H_s$ is positive definite, making $H_s + \varphi (1-\varphi ) _{r^{\prime },o^{\prime }}^\intercal _{r^{\prime },o^{\prime }}$ positive definite as well, and invertible. Then, we compute the score change as: (s,r,o)-(s, r, o)= r,o (-) =
((1-) (Hs + (1-) r',o'r',o')-1 r',o')r,o.
Modifications of the Form 〈s,r ' ,o〉\langle s, r^{\prime }, o \rangle
In this section we approximate the effect of attack in the form of $\langle s, r^{\prime }, o \rangle $ . In contrast to $\langle s^{\prime }, r^{\prime }, o \rangle $ attacks, for this scenario we need to consider the change in the $$ , upon applying the attack, in approximation of the change in the score as well. Using previous results, we can approximate the $-$ as: -=
(1-) (Ho + (1-) s,r's,r')-1 s,r' and similarly, we can approximate $-$ as: -=
(1-) (Hs + (1-) r',or',o)-1 r',o where $H_s$ is the Hessian matrix over $$ . Then using these approximations: s,r(-) =
s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r') and: (-) r,o=
((1-) (Hs + (1-) r',or',o)-1 r',o) r,o and then calculate the change in the score as: (s,r,o)-(s, r, o)=
s,r.(-) +(-).r,o =
s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r')+
((1-) (Hs + (1-) r',or',o)-1 r',o) r, o
First-order Approximation of the Change For TransE
Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack on $\overline{\psi }{(s,r,o)}$ (where $\psi {(s,r,o)}=\Vert \mathbf{e}_s+\mathbf{e}_r-\mathbf{e}_o\Vert $ ), we need to compute $\overline{\mathbf{e}}_o$ . To do so, we need to derive an efficient computation for $\overline{\mathbf{e}}_o$ . First, the derivative of the loss $\mathcal{L}(\overline{G})= \mathcal{L}(G)+\mathcal{L}(\langle s^{\prime }, r^{\prime }, o \rangle )$ over $\mathbf{e}_o$ is:
$$\nabla _{e_o} \mathcal{L}(\overline{G}) = \nabla _{e_o} \mathcal{L}(G) + (1-\varphi )\, \frac{\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o}{\psi (s^{\prime },r^{\prime },o)}$$
where $\mathbf{z}_{s^{\prime }, r^{\prime }} = \mathbf{e}_{s^{\prime }}+ \mathbf{e}_{r^{\prime }}$ , and $\varphi = \sigma (\psi (s^{\prime },r^{\prime },o))$ . At convergence, after retraining, we expect $\nabla _{e_o} \mathcal{L}(\overline{G})=0$ . We perform a first-order Taylor approximation of $\nabla _{e_o} \mathcal{L}(\overline{G})$ to get:
$$0 \approx (1-\varphi )\, \frac{\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o}{\psi (s^{\prime },r^{\prime },o)}+\big (H_o - H_{s^{\prime },r^{\prime },o}\big )(\overline{\mathbf{e}}_o-\mathbf{e}_o)$$
$$H_{s^{\prime },r^{\prime },o} = \varphi (1-\varphi )\frac{(\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o)(\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^2}+\frac{1-\varphi }{\psi (s^{\prime },r^{\prime },o)}\,I-(1-\varphi )\frac{(\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o)(\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o)^\intercal }{\psi (s^{\prime },r^{\prime },o)^3}$$
where $H_o$ is the $d\times d$ Hessian matrix for $o$ , i.e., the second-order derivative of the loss w.r.t. $\mathbf{e}_o$ , computed sparsely, and $I$ is the identity matrix. Solving for $\overline{\mathbf{e}}_o$ gives us:
$$\overline{\mathbf{e}}_o = -(1-\varphi )\, \big (H_o - H_{s^{\prime },r^{\prime },o}\big )^{-1} \frac{\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o}{\psi (s^{\prime },r^{\prime },o)} + \mathbf{e}_o.$$
Then, we compute the score change as:
$$\overline{\psi }(s,r,o)= \Vert \mathbf{e}_s+\mathbf{e}_r-\overline{\mathbf{e}}_o\Vert = \Big \Vert \mathbf{e}_s+\mathbf{e}_r+(1-\varphi )\, \big (H_o - H_{s^{\prime },r^{\prime },o}\big )^{-1} \frac{\mathbf{z}_{s^{\prime }, r^{\prime }}-\mathbf{e}_o}{\psi (s^{\prime },r^{\prime },o)} - \mathbf{e}_o\Big \Vert .$$
Calculating this expression is efficient since $H_o$ is a $d\times d$ matrix.
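The TransE derivation above can be sketched in the same way. The snippet below follows the reconstructed equations, with the scalar term of $H_{s^{\prime },r^{\prime },o}$ multiplied by the identity (our dimensional reading of the garbled equation) and a toy positive-definite $H_o$; it is an illustration, not the authors' implementation.

```python
# Sketch of the TransE score-change approximation after adding <s', r', o>.
# The Hessians are toy placeholders and the helper names are ours.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transe_attacked_score(e_s, e_r, e_o, e_sp, e_rp, H_o):
    """Approximate psi_bar(s,r,o) = ||e_s + e_r - e_o_bar|| after adding <s', r', o>."""
    z = e_sp + e_rp                       # z_{s',r'}
    diff = z - e_o
    psi_att = np.linalg.norm(diff)        # psi(s', r', o)
    phi = sigmoid(psi_att)
    outer = np.outer(diff, diff)
    H_att = (phi * (1 - phi) * outer / psi_att**2
             + (1 - phi) / psi_att * np.eye(len(e_o))   # scalar term read as a multiple of I
             - (1 - phi) * outer / psi_att**3)
    e_o_bar = e_o - (1 - phi) * np.linalg.solve(H_o - H_att, diff / psi_att)
    return np.linalg.norm(e_s + e_r - e_o_bar)

d = 8
rng = np.random.default_rng(1)
e_s, e_r, e_o, e_sp, e_rp = rng.normal(size=(5, d))
H_o = 2.0 * np.eye(d)                     # placeholder Hessian
print(transe_attacked_score(e_s, e_r, e_o, e_sp, e_rp, H_o))
```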
Sample Adversarial Attacks
In this section, we provide the output of the proposed method for some target triples. Sample adversarial attacks are provided in Table 5 . As it shows, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. | Yes
330f2cdeab689670b68583fc4125f5c0b26615a8 | 330f2cdeab689670b68583fc4125f5c0b26615a8_0 | Q: what are the advantages of the proposed model?
Text: Introduction
Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 .
Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 .
Supervised topic models are thus state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and impractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case where a single oracle labels an entire collection.
Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ).
The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models.
In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds.
Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart.
Supervised topic models
Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.
From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions. The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 .
An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels.
From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multinomial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multinomial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of these features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be in a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions.
The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. On the contrary, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics.
Learning from multiple annotators
Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skeene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.
While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm.
Despite the considerable number of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been approached to a much smaller extent. For example, Groot et al. BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators' labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators.
Classification model
In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents.
Proposed model
Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is provided a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can explore to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes, however that would considerably increase the complexity of inference.
Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases.
The generative process of the proposed model for classification problems can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) class INLINEFORM0
For each annotator INLINEFORM0
Draw annotator's label INLINEFORM0
where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0
Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variable from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 .
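The generative story above can be made concrete with a small simulation. The sketch below is ours (not the authors' code), with toy sizes and hyper-parameters: documents are drawn from LDA, the latent true class from a softmax on the mean topic assignments, and each annotator's label from the row of a per-annotator confusion matrix.

```python
# Illustrative NumPy simulation of the proposed generative process for classification.
import numpy as np

rng = np.random.default_rng(0)
K, V, C, R, D, N = 5, 50, 3, 4, 10, 30   # topics, vocab, classes, annotators, docs, words/doc

beta = rng.dirichlet(np.ones(V) * 0.1, size=K)      # per-topic word distributions
eta = rng.normal(size=(C, K))                        # class coefficients
# pi[r, c] = answer distribution of annotator r when the true class is c (diagonally dominant)
pi = np.stack([
    np.stack([rng.dirichlet(np.ones(C) + 8.0 * np.eye(C)[c]) for c in range(C)])
    for _ in range(R)
])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

docs, true_classes, crowd_labels = [], [], []
for d in range(D):
    theta = rng.dirichlet(np.ones(K))                # topic proportions
    z = rng.choice(K, size=N, p=theta)               # topic assignments
    w = np.array([rng.choice(V, p=beta[k]) for k in z])
    z_bar = np.bincount(z, minlength=K) / N          # mean topic assignment
    c = rng.choice(C, p=softmax(eta @ z_bar))        # latent true class
    y = [rng.choice(C, p=pi[r, c]) for r in range(R)]  # noisy annotator labels
    docs.append(w); true_classes.append(c); crowd_labels.append(y)

print(true_classes[:3], crowd_labels[:3])
```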
For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document. Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0
This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator" is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator" has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator" is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator" corresponds to the worst kind of annotator (with high bias and low precision).
Having specified a model for annotators answers given the true targets, the only thing left is to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) target INLINEFORM0
For each annotator INLINEFORM0
Draw answer INLINEFORM0
Fig. FIGREF60 shows a graphical representation of the proposed model.
Approximate inference
Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference.
Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters.
Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0
which we maximize using coordinate ascent.
Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0
The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero, yields the following update DISPLAYFORM0
Similarly, the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 . Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0
Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators “weighted" by their expected log probability INLINEFORM3 .
The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0
the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0
where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0
Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fix-point update DISPLAYFORM0
where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 .
The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.
The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower-bound on the log marginal likelihood is now given by DISPLAYFORM0
where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets.
Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0
where we defined INLINEFORM0 . Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution, which is given by INLINEFORM1 .
The only variables left for doing inference on are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0
Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0
Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.
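Since the displayed update is only a placeholder in this copy, the following sketch reconstructs the weighted-average form described in the text: the regression prediction on the empirical topic mixture and the bias-corrected annotator answers are combined with weights given by the (fixed) inverse variance of the latent targets and the annotators' precisions. The exact normalization is our reading of the description, not a verbatim transcription of the paper's equation.

```python
# Hedged sketch of the update for the posterior mean m_d of the latent true target.
import numpy as np

def update_latent_target_mean(z_bar_d, eta, answers_d, biases, precisions, inv_sigma2=1.0):
    """answers_d, biases, precisions: arrays over the annotators that labeled document d."""
    num = inv_sigma2 * np.dot(eta, z_bar_d) + np.sum(precisions * (answers_d - biases))
    den = inv_sigma2 + np.sum(precisions)
    return num / den

z_bar_d = np.array([0.2, 0.5, 0.3])        # mean topic assignments of document d
eta = np.array([1.0, 4.0, 2.0])            # regression coefficients
answers = np.array([3.1, 4.0, 2.5])        # annotator answers for document d
biases = np.array([0.0, 0.8, -0.3])
precisions = np.array([10.0, 3.0, 0.5])
print(update_latent_target_mean(z_bar_d, eta, answers, biases, precisions))
```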
As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0
and taking derivatives, yielding the update DISPLAYFORM0
Parameter estimation
The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore well the parameter space without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0
where, for convenience, we defined the following variable: INLINEFORM0 .
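Since the objective and gradient are shown only as placeholders here, the sketch below merely illustrates how such an M-step is typically wired to an off-the-shelf L-BFGS routine; the quadratic toy objective stands in for the negative evidence lower bound restricted to the coefficients.

```python
# Generic sketch of the M-step for the coefficients using SciPy's L-BFGS-B.
# `neg_bound_and_grad` is a stand-in: in the real model it would return the negative
# evidence lower bound (as a function of the coefficients) and its gradient,
# computed from the E-step statistics.
import numpy as np
from scipy.optimize import minimize

C, K = 4, 20
rng = np.random.default_rng(0)
target = rng.normal(size=C * K)                      # toy "optimal" coefficients

def neg_bound_and_grad(eta_flat):
    diff = eta_flat - target                         # placeholder objective and gradient
    return 0.5 * np.dot(diff, diff), diff

res = minimize(neg_bound_and_grad, x0=np.zeros(C * K), jac=True, method="L-BFGS-B")
eta = res.x.reshape(C, K)
print(res.success, eta.shape)
```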
The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume that the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM.
Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t. INLINEFORM2 and setting them to zero, gives the following solution for INLINEFORM3 DISPLAYFORM0
where DISPLAYFORM0
We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0
Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0
Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0
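The two closed-form updates described above can be sketched as follows. Because the displayed equations are placeholders in this copy, the expressions below are reconstructed from the text (bias as the average residual; precision as the inverse of the average squared bias-corrected residual plus the posterior variance of the latent target) and should be read as an approximation of the paper's formulas, not a copy of them.

```python
# Hedged sketch of the M-step updates for each annotator's bias and precision.
import numpy as np

def update_annotator(answers_r, m, v):
    """answers_r, m, v: answers of annotator r, posterior means and variances of the
    latent true targets, restricted to the documents that annotator r labeled."""
    bias = np.mean(answers_r - m)
    mse = np.mean((answers_r - bias - m) ** 2 + v)
    precision = 1.0 / mse
    return bias, precision

answers_r = np.array([4.2, 3.9, 2.8, 5.0])
m = np.array([3.5, 3.2, 2.5, 4.1])
v = np.full(4, 0.05)
print(update_annotator(answers_r, m, v))
```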
Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinates ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 .
Stochastic variational inference
In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 .
Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given an uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients.
Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .
[t] Stochastic variational inference for the proposed classification model [1] Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 Set t = t + 1 Sample a document INLINEFORM6 uniformly from the corpus Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8 Compute INLINEFORM9 using Eq. EQREF25 Compute INLINEFORM10 using Eq. EQREF29 local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge Compute step-size INLINEFORM14 Update topics variational parameters DISPLAYFORM0
Update annotators confusion parameters DISPLAYFORM0
global convergence criterion is met
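A skeletal version of this stochastic update loop is sketched below. The per-document coordinate ascent is abstracted into a stand-in function and the Dirichlet prior hyper-parameters are set to 1; the point is the Robbins-Monro step size $\rho_t = (t + \tau)^{-\kappa}$ and the convex-combination update of the global variational parameters, with the sampled document treated as if it were observed $D$ times. Names and sizes are illustrative.

```python
# Skeleton of the stochastic update of the global variational parameters
# (topics and annotator confusion parameters) in the algorithm above.
import numpy as np

rng = np.random.default_rng(0)
D, K, V, C, R = 1000, 10, 200, 4, 5      # docs, topics, vocab, classes, annotators
tau0, kappa = 64.0, 0.7
zeta = rng.gamma(1.0, 1.0, size=(K, V))  # topics variational parameters
xi = np.ones((R, C, C))                  # annotators' confusion variational parameters

def local_step(doc_id):
    """Stand-in for the per-document coordinate ascent (local E-step)."""
    word_topic_counts = rng.random((K, V))    # expected topic-word counts for this document
    label_counts = rng.random((R, C, C))      # expected confusion counts for this document
    return word_topic_counts, label_counts

for t in range(1, 101):
    rho = (t + tau0) ** (-kappa)              # step size schedule
    doc_id = rng.integers(D)
    wt_counts, lbl_counts = local_step(doc_id)
    # the sampled document is treated as if it were observed D times (noisy natural gradient);
    # the leading 1.0 stands in for the Dirichlet prior hyper-parameters
    zeta = (1 - rho) * zeta + rho * (1.0 + D * wt_counts)
    xi = (1 - rho) * xi + rho * (1.0 + D * lbl_counts)

print(zeta.shape, xi.shape)
```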
As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference for the proposed regression model. In this case, the only “global" latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local" latent variables, instead of a single variable INLINEFORM1 , we now have two variables per-document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .
[t] Stochastic variational inference for the proposed regression model [1] Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 Set t = t + 1 Sample a document INLINEFORM7 uniformly from the corpus Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9 Compute INLINEFORM10 using Eq. EQREF25 Compute INLINEFORM11 using Eq. EQREF66 Compute INLINEFORM12 using Eq. EQREF68 local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge Compute step-size INLINEFORM16 Update topics variational parameters DISPLAYFORM0
global convergence criterion is met
Document classification
In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topics distributions. Letting the topics distribution over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0
Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0
Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0
This is equivalent to making predictions in the classification version of sLDA BIBREF2 .
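A small sketch of this prediction rule follows. It assumes the mean topic assignments of the new document have already been inferred with the LDA-style updates; the shapes and coefficient values are purely illustrative.

```python
# Predict the class of a new document from its mean topic assignments z_bar
# and the learned coefficients eta, via the softmax.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_class(z_bar, eta):
    """z_bar: (K,) mean topic assignments; eta: (C, K) learned coefficients."""
    return int(np.argmax(softmax(eta @ z_bar)))

eta = np.array([[2.0, -1.0, 0.5],
                [-0.5, 1.5, 0.0],
                [0.0, 0.2, 2.5]])
z_bar = np.array([0.1, 0.2, 0.7])
print(predict_class(z_bar, eta))
```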
Regression model
In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model with a special focus on the how to handle multiple annotators with different biases and reliabilities when the target variables are continuous variables. Next, we present a variational inference algorithm, highlighting the differences to the classification version. Finally, we show how to optimize the model parameters.
Experiments
In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and using real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting number of stars that a given user gave to a restaurant based on the review; predicting movie ratings using the text of the reviews.
Classification
In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers", “science", “politics" and “recreative". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the lines of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotators' answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.
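The simulation procedure just described amounts to a few lines of NumPy. In the sketch below the confusion matrices are random toy examples rather than the ones used in the experiment, and every simulated worker labels every document for brevity (in the experiment each one labels roughly one-fifth of the data).

```python
# Simulate annotators whose answers are sampled from the rows of per-annotator
# confusion matrices, indexed by the true label.
import numpy as np

rng = np.random.default_rng(0)
C, R = 4, 5                                   # classes, annotators
true_labels = rng.integers(C, size=20)

# each row c of pi[r] is the answer distribution of annotator r when the true class is c
pi = np.stack([rng.dirichlet(np.full(C, 0.5), size=C) for _ in range(R)])

simulated = np.array([[rng.choice(C, p=pi[r, c]) for r in range(R)] for c in true_labels])
accuracy = (simulated == true_labels[:, None]).mean(axis=0)
print(accuracy)   # per-annotator accuracy against the ground truth
```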
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
[itemsep=0.02cm]
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
For all the experiments the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version being the one that performs best.
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .
The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers.
The results obtained by the different approaches are given in Fig. FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm is using mini-batches of 300 documents.
The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes beyond non-textual data. Using the Matlab interface provided in the projects' website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway", “inside city", “tall building", “street", “forest", “coast", “mountain" or “open country". This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers.
The preprocessing of the images used is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order to construct a vocabulary of 200 “visual words". This allows us to represent the images with a bag of visual words model.
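For readers who want to reproduce a comparable representation, here is a rough bag-of-visual-words sketch using OpenCV SIFT and scikit-learn's mini-batch k-means with k=200. It is not the authors' exact setup: descriptors are computed on a coarse keypoint grid with a single patch size, rather than the one-pixel sliding grid with randomly sized patches described above, purely to keep the example small.

```python
# Rough bag-of-visual-words pipeline: dense SIFT-like descriptors + k-means vocabulary.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def dense_sift(img, step=8, size=16):
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(0, img.shape[0], step)
           for x in range(0, img.shape[1], step)]
    _, desc = sift.compute(img, kps)
    return desc

# toy "images"; in practice these would be the 256x256 LabelMe images in grayscale
images = [np.random.randint(0, 256, (256, 256), dtype=np.uint8) for _ in range(4)]
all_desc = np.vstack([dense_sift(im) for im in images])

kmeans = MiniBatchKMeans(n_clusters=200, n_init=3, random_state=0).fit(all_desc)
bags = np.stack([np.bincount(kmeans.predict(dense_sift(im)), minlength=200) for im in images])
print(bags.shape)   # (num_images, 200) bag-of-visual-words counts
```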
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed model is able to capture these patterns correctly and account for their effect.
To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city" to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city" when the true label is actually “street", and it is able to leverage that fact to infer that the correct label is “street". Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (likelihood distribution for these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somehow surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city".
Regression
As for the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists of tokenizing the text into bigrams and discarding those that appear in less than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true review of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low precision annotators where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749]. Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments.
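The simulated reviewers can be reproduced with the short sketch below: each annotator has a (bias, precision) pair and answers with Gaussian noise of variance equal to the inverse precision. The true ratings here are random stand-ins rather than the actual we8there scores.

```python
# Simulate biased, noisy reviewers and compute their coefficients of determination (R^2).
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=1000)          # stand-in for the overall ratings

annotators = [(0.1, 10.0), (-0.3, 3.0), (-2.5, 10.0), (0.1, 0.5), (1.0, 0.25)]  # (bias, precision)
answers = np.stack([true_ratings + b + rng.normal(scale=1.0 / np.sqrt(p), size=true_ratings.size)
                    for b, p in annotators])

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print([round(r2(true_ratings, a), 3) for a in answers])   # per-annotator R^2
print(round(r2(true_ratings, answers.mean(axis=0)), 3))   # R^2 of the mean answer
```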
We compare the proposed model (MA-sLDAr) with the two following baselines:
[itemsep=0.02cm]
LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.
sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.
Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotator simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only ran experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods.
The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then to predict how much a person liked a movie based on what she says about it. We ask workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per review was collected for a total of 1500 reviews. The remaining reviews were used for testing. On average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ).
The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines.
With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the colour of two horizontally-adjacent squares would then be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 .
Conclusion
This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution for the multi-annotator paradigm. This model is then capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but that inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk, that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of a restaurant and the rating of a movie based on their reviews. For this, we used various popular datasets from the state of the art that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets.
Acknowledgment
The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for funding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS).
[]Mariana Lourenço has a MSc degree in Informatics Engineering from University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing.
[]Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra in Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and a MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible/participated in several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is IEEE Senior Member, and member of IARP International Association of Pattern Recognition and ACM.
[]Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master's (2000) and Ph.D. (2005) degrees in Computer Science from University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor in University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award, in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from Transportation Research Board, in 2015. | The proposed model outperforms all the baselines, with the svi version being the one that performs best; the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
c87b2dd5c439d5e68841a705dd81323ec0d64c97 | c87b2dd5c439d5e68841a705dd81323ec0d64c97_0 | Q: what are the state of the art approaches?
Text: Introduction
Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 .
Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 .
Supervised topic models are thus state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and impractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case where a single oracle labels an entire collection.
Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ).
The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models.
In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds.
Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart.
Supervised topic models
Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.
From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions. The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 .
An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels.
From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multinomial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multinomial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of these features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be on a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions.
The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. In contrast, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics.
Learning from multiple annotators
Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.
While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm.
Despite the considerable number of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been explored to a much smaller extent. For example, Groot et al. BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators' labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators.
Classification model
In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents.
Proposed model
Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is provided a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can explore to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes, however that would considerably increase the complexity of inference.
Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases.
The generative process of the proposed model for classification problems can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) class INLINEFORM0
For each annotator INLINEFORM0
Draw annotator's label INLINEFORM0
where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0
Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variables from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators' reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 .
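To make the generative process above concrete, the following minimal Python sketch samples a synthetic corpus from it. All dimensions, hyper-parameter values and variable names (K topics, C classes, R annotators, D documents, V vocabulary words, alpha, tau, omega) are illustrative assumptions rather than values used in the paper; the sketch is only meant to show how topics, topic proportions, latent true classes and per-annotator confusion matrices fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes and hyper-parameter values below are illustrative assumptions.
K, C, R, D, V = 5, 4, 3, 100, 1000        # topics, classes, annotators, documents, vocabulary
alpha, tau, omega = 1.0, 1.0, 1.0         # symmetric Dirichlet hyper-parameters

beta = rng.dirichlet(tau * np.ones(V), size=K)        # per-topic word distributions
pi = rng.dirichlet(omega * np.ones(C), size=(R, C))   # per-annotator confusion matrices pi[r, c]
eta = rng.normal(size=(C, K))                         # class coefficients

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

docs, true_classes, crowd_labels = [], [], []
for d in range(D):
    theta = rng.dirichlet(alpha * np.ones(K))             # topic proportions of document d
    n_d = rng.poisson(50) + 1                             # document length
    z = rng.choice(K, size=n_d, p=theta)                  # per-word topic assignments
    w = np.array([rng.choice(V, p=beta[k]) for k in z])   # words
    z_bar = np.bincount(z, minlength=K) / n_d             # mean topic assignment
    c = rng.choice(C, p=softmax(eta @ z_bar))             # latent true class
    y = [rng.choice(C, p=pi[r, c]) for r in range(R)]     # noisy labels from each annotator
    docs.append(w); true_classes.append(c); crowd_labels.append(y)
```

Inference then works in the opposite direction: given only the words and the crowd labels, it recovers the topic proportions, the latent true classes and the confusion matrices.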
For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document. Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0
This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator" is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator" has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator" is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator" corresponds to the worst kind of annotator (with high bias and low precision).
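As a quick illustration of this noise model, the sketch below samples answers for the four annotator archetypes just described. The specific (bias, precision) values and the true target are made up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 3.5                                  # latent true target (e.g. a 5-star rating)

# Hypothetical (bias, precision) pairs mirroring the four archetypes in the figure:
annotators = {
    "green (low bias, high precision)":  (0.0, 10.0),
    "yellow (low bias, low precision)":  (0.0, 0.5),
    "blue (high bias, high precision)":  (1.5, 10.0),
    "red (high bias, low precision)":    (1.5, 0.5),
}
for name, (b, p) in annotators.items():
    answers = rng.normal(loc=x_true + b, scale=1.0 / np.sqrt(p), size=5)
    print(name, np.round(answers, 2))
```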
Having specified a model for annotators' answers given the true targets, the only thing left to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) target INLINEFORM0
For each annotator INLINEFORM0
Draw answer INLINEFORM0
Fig. FIGREF60 shows a graphical representation of the proposed model.
Approximate inference
Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference.
Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters.
Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0
which we maximize using coordinate ascent.
Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0
The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero, yields the following update DISPLAYFORM0
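The expectations of log probabilities under variational Dirichlet distributions that appear in these updates have the standard closed form psi(xi_l) - psi(sum_l' xi_l'), where psi is the digamma function. A small generic helper (a sketch, not tied to the paper's exact notation):

```python
import numpy as np
from scipy.special import digamma

def expected_log_dirichlet(xi):
    """E[log p] for each component of a Dirichlet(xi) variable: psi(xi_l) - psi(sum_l xi_l).
    Works row-wise, so it can also be applied to variational confusion-matrix parameters."""
    xi = np.asarray(xi, dtype=float)
    return digamma(xi) - digamma(xi.sum(axis=-1, keepdims=True))

print(expected_log_dirichlet([1.0, 2.0, 3.0]))
```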
Similarly, the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 . Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0
Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators “weighted" by their expected log probability INLINEFORM3 .
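The sketch below spells out one plausible reading of that combination in code. It is an assumed form based on the verbal description above, not a verbatim transcription of the update equation; the inputs (mean topic assignment, class coefficients, expected log confusion matrices and observed labels) follow the quantities named in the text.

```python
import numpy as np

def update_q_true_class(z_bar, eta, E_log_pi, labels):
    """Variational distribution over the true class of one document (assumed form):
    eta_c . z_bar plus the sum over annotators of E[log pi^r_{c, y_r}], exponentiated
    and normalised.
    z_bar: (K,) mean topic assignment; eta: (C, K) class coefficients;
    E_log_pi: (R, C, C) expected log confusion matrices; labels: {annotator r: label y_r}."""
    scores = eta @ z_bar
    for r, y_r in labels.items():
        scores = scores + E_log_pi[r, :, y_r]
    scores = np.exp(scores - scores.max())
    return scores / scores.sum()
```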
The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0
the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0
where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0
Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fixed-point update DISPLAYFORM0
where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 .
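A small numerical check may help build intuition for the kind of local bound involved here: the standard linear upper bound log z <= z/a - 1 + log a (valid for any a > 0 and tight at a = z) is presumably what underlies this approximation, and it is easy to verify on the log-sum-exp term.

```python
import numpy as np

def upper_bound(x, a):
    # log(sum_c exp(x_c)) <= (1/a) * sum_c exp(x_c) - 1 + log(a), for any a > 0
    return np.sum(np.exp(x)) / a - 1.0 + np.log(a)

x = np.array([0.3, -1.2, 0.8, 0.1])
exact = np.log(np.sum(np.exp(x)))
a_star = np.sum(np.exp(x))             # the value of a that makes the bound tight
print(np.isclose(exact, upper_bound(x, a_star)))                        # True
print(all(upper_bound(x, a) >= exact for a in [0.5, 1.0, 2.0, 10.0]))   # True
```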
The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.
The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower-bound on the log marginal likelihood is now given by DISPLAYFORM0
where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets.
Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0
where we defined INLINEFORM0 . Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution, which is given by INLINEFORM1 .
The only variables left for doing inference on are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0
Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0
Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.
As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0
and taking derivatives, yielding the update DISPLAYFORM0
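A compact sketch of what these two updates amount to, under the assumption (consistent with the description above) that the posterior of the true target is the usual Gaussian combination of the regression prediction and the precision-weighted, bias-corrected annotator answers:

```python
import numpy as np

def update_true_target(z_bar, eta, sigma2, answers, biases, precisions):
    """Assumed closed form for the variational posterior of a document's true target:
    a N(eta . z_bar, sigma2) term from the regression part combined with Gaussian
    annotator likelihoods N(x + b_r, 1/p_r)."""
    prior_prec = 1.0 / sigma2
    post_var = 1.0 / (prior_prec + np.sum(precisions))
    post_mean = post_var * (prior_prec * (eta @ z_bar)
                            + np.sum(precisions * (answers - biases)))
    return post_mean, post_var

m, v = update_true_target(z_bar=np.array([0.2, 0.5, 0.3]), eta=np.array([1.0, -2.0, 0.5]),
                          sigma2=1.0, answers=np.array([3.1, 4.0]),
                          biases=np.array([0.0, 1.0]), precisions=np.array([10.0, 0.5]))
```

The toy call at the end is only meant to show the expected shapes of the inputs.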
Parameter estimation
The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore well the parameter space without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0
where, for convenience, we defined the following variable: INLINEFORM0 .
The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators' biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM.
Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t. INLINEFORM2 and setting them to zero, gives the following solution for INLINEFORM3 DISPLAYFORM0
where DISPLAYFORM0
We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0
Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0
Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0
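The following sketch shows what such maximum likelihood updates typically look like. It is an assumed form consistent with the text: the bias as the average residual against the expected true targets, and the precision as the inverse of the average expected squared residual after bias correction.

```python
import numpy as np

def estimate_bias_precision(y, m, v):
    """Per-annotator M-step (assumed form). y: answers of one annotator on the documents
    she labelled; m, v: variational means and variances of the corresponding true targets."""
    resid = y - m
    bias = resid.mean()
    # E[(y - x - b)^2] = (y - m - b)^2 + v under q(x) = N(m, v)
    variance = np.mean((resid - bias) ** 2 + v)
    return bias, 1.0 / variance

b, p = estimate_bias_precision(y=np.array([4.2, 3.9, 5.1]),
                               m=np.array([3.8, 3.5, 4.9]),
                               v=np.array([0.05, 0.04, 0.06]))
```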
Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinates ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 .
Stochastic variational inference
In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 .
Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given a uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 , by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients.
Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .
Algorithm: Stochastic variational inference for the proposed classification model
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5
repeat
    Set t = t + 1
    Sample a document INLINEFORM6 uniformly from the corpus
    repeat
        Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8
        Compute INLINEFORM9 using Eq. EQREF25
        Compute INLINEFORM10 using Eq. EQREF29
    until local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge
    Compute step-size INLINEFORM14
    Update topics variational parameters DISPLAYFORM0
    Update annotators confusion parameters DISPLAYFORM0
until global convergence criterion is met
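For reference, a minimal sketch of the two generic ingredients of such an SVI loop: a Robbins-Monro step-size schedule satisfying the conditions above, and the convex-combination update that moves a global variational parameter towards its noisy, single-document estimate. The schedule constants and the toy arrays are illustrative assumptions.

```python
import numpy as np

def step_size(t, tau0=1.0, kappa=0.7):
    # rho_t = (t + tau0)^(-kappa); any 0.5 < kappa <= 1 satisfies
    # sum_t rho_t = inf and sum_t rho_t^2 < inf
    return (t + tau0) ** (-kappa)

def svi_global_step(lam, lam_hat, rho):
    # Move the global variational parameter a step of size rho towards the
    # estimate computed as if the sampled document were replicated D times
    return (1.0 - rho) * lam + rho * lam_hat

lam = np.ones((5, 1000))                       # e.g. topic-word variational parameters (toy)
lam_hat = np.random.rand(5, 1000) * 50         # noisy single-document estimate (toy)
lam = svi_global_step(lam, lam_hat, step_size(t=1))
```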
As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference algorithm for the proposed regression model. In this case, the only “global" latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local" latent variables, instead of a single variable INLINEFORM1 , we now have two variables per document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference algorithm can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators' biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .
Algorithm: Stochastic variational inference for the proposed regression model
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6
repeat
    Set t = t + 1
    Sample a document INLINEFORM7 uniformly from the corpus
    repeat
        Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9
        Compute INLINEFORM10 using Eq. EQREF25
        Compute INLINEFORM11 using Eq. EQREF66
        Compute INLINEFORM12 using Eq. EQREF68
    until local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge
    Compute step-size INLINEFORM16
    Update topics variational parameters DISPLAYFORM0
until global convergence criterion is met
Document classification
In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topics distributions. Letting the topics distribution over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0
Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0
Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0
This is equivalent to making predictions in the classification version of sLDA BIBREF2 .
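In code, this prediction step is a one-liner once the mean topic assignment of the new document has been inferred; the sketch below assumes the same quantities used throughout (class coefficients eta and mean topic assignment z_bar), with toy values.

```python
import numpy as np

def predict_class(z_bar, eta):
    """Softmax of eta_c . z_bar over classes; returns the argmax and the class probabilities."""
    scores = eta @ z_bar
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return int(np.argmax(p)), p

c_hat, probs = predict_class(z_bar=np.array([0.1, 0.6, 0.3]),
                             eta=np.array([[1.0, -0.5, 0.2],
                                           [0.3, 2.0, -1.0]]))
```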
Regression model
In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model with a special focus on how to handle multiple annotators with different biases and reliabilities when the target variables are continuous variables. Next, we present a variational inference algorithm, highlighting the differences from the classification version. Finally, we show how to optimize the model parameters.
Experiments
In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting the number of stars that a given user gave to a restaurant based on the review; predicting movie ratings using the text of the reviews.
Classification
In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers", “science", “politics" and “recreative". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the lines of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotators' answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.
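A minimal sketch of this simulation procedure is given below; the 4-class confusion matrix is a made-up example rather than one of the five matrices actually used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annotator(true_labels, confusion):
    """Sample one simulated annotator's answers: for each document, draw the answer
    from the row of the (normalised) confusion matrix indexed by the true label."""
    return np.array([rng.choice(len(confusion), p=confusion[c]) for c in true_labels])

# Hypothetical 4-class confusion matrix for a mediocre annotator (rows sum to 1):
pi_r = np.full((4, 4), 0.1) + 0.6 * np.eye(4)
true_labels = rng.integers(0, 4, size=20)
answers = simulate_annotator(true_labels, pi_r)
print("accuracy:", np.mean(answers == true_labels))
```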
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
[itemsep=0.02cm]
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topic distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
For all the experiments, the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, being the svi version the one that performs best.
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .
The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers.
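For completeness, the majority-voting aggregation used here, and in several of the baselines, is simply the following, with ties broken arbitrarily (here, by the lowest class index):

```python
import numpy as np

def majority_vote(labels_per_doc, n_classes):
    """Aggregate crowd labels per document by majority voting."""
    return np.array([np.argmax(np.bincount(lbls, minlength=n_classes))
                     for lbls in labels_per_doc])

votes = [[2, 2, 5], [0, 1, 1, 1], [7]]
print(majority_vote(votes, n_classes=8))   # -> [2 1 7]
```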
The results obtained by the different approaches are given in Fig. FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm is using mini-batches of 300 documents.
The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes to non-textual data. Using the Matlab interface provided on the project's website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway", “inside city", “tall building", “street", “forest", “coast", “mountain" or “open country". This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify into one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers.
The preprocessing of the images used is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order to construct a vocabulary of 200 “visual words". This allows us to represent the images with a bag of visual words model.
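A rough sketch of the visual-word construction step is given below, assuming the dense 128-dimensional SIFT descriptors have already been extracted; scikit-learn's KMeans is used as a stand-in for whatever k-means implementation was actually employed, and the toy descriptors at the end are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_visual_words(descriptors_per_image, k=200, seed=0):
    """Cluster all descriptors into k 'visual words' and represent each image as a
    histogram of word counts. descriptors_per_image: list of (n_i, 128) arrays."""
    all_desc = np.vstack(descriptors_per_image)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_desc)
    hists = []
    for desc in descriptors_per_image:
        words = km.predict(desc)
        hists.append(np.bincount(words, minlength=k))   # per-image word counts
    return np.array(hists), km

# Toy usage with random descriptors standing in for real SIFT features:
fake = [np.random.rand(np.random.randint(50, 200), 128) for _ in range(5)]
X, vocab = bag_of_visual_words(fake, k=20)
```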
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to the one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier on the documents' topic distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed model is able to capture these patterns correctly and account for their effect.
To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city" to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city" when the true label is actually “street", and it is able to leverage that fact to infer that the correct label is “street". Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (the likelihood distribution in these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somewhat surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city".
Regression
As with the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists of tokenizing the text into bigrams and discarding those that appear in fewer than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true rating of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low precision annotators where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749]. Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments.
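A sketch of this simulation is given below, assuming the answers are drawn from the Gaussian noise model defined in Section SECREF5 (each answer is centred at the true target plus the annotator's bias, with variance equal to the inverse precision). The uniform stand-in for the true ratings is an illustrative assumption; in the experiment they come from the we8there reviews, so the resulting scores will differ from the values reported above.

```python
import numpy as np

rng = np.random.default_rng(7)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# The (bias, precision) pairs used in the simulation described above
pairs = [(0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5), (1, 0.25)]
x = rng.uniform(1, 5, size=1000)                      # synthetic stand-in for the true ratings
answers = [rng.normal(x + b, 1.0 / np.sqrt(p)) for b, p in pairs]
print([round(r2(x, y), 3) for y in answers])          # per-annotator coefficients of determination
print("mean of annotators:", round(r2(x, np.mean(answers, axis=0)), 3))
```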
We compare the proposed model (MA-sLDAr) with the two following baselines:
[itemsep=0.02cm]
LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.
sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.
Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotator simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only ran experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods.
The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then to predict how much a person liked a movie based on what she says about it. We asked workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per review was collected for a total of 1500 reviews. The remaining reviews were used for testing. On average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ).
The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines.
With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the color of two horizontally-adjacent squares would then be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 .
Conclusion
This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution to the multi-annotator paradigm. This model is then capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk, that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of a restaurant and the rating of a movie based on their reviews. For this, we used various popular datasets that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets.
Acknowledgment
The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for funding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS).
[]Mariana Lourenço has a MSc degree in Informatics Engineering from University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing.
[]Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra in Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and a MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible/participated in several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is IEEE Senior Member, and member of IARP International Association of Pattern Recognition and ACM.
[]Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master's (2000) and Ph.D. (2005) degrees in Computer Science from University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor at the University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award, in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from Transportation Research Board, in 2015. | Bosch 2006 (mv), LDA + LogReg (mv), LDA + Raykar, LDA + Rodrigues, Blei 2003 (mv), sLDA (mv)
f7789313a804e41fcbca906a4e5cf69039eeef9f | f7789313a804e41fcbca906a4e5cf69039eeef9f_0 | Q: what datasets were used?
Text: Introduction
Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 .
Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 .
Supervised topic models are thus state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and impractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case that a single oracle labels an entire collection.
Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ).
The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models.
In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds.
Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart.
Supervised topic models
Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.
From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topic mixture proportions. The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 .
An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels.
From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multinomial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multinomial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of these features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be on a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions.
The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. On the contrary, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics.
Learning from multiple annotators
Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.
While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm.
Despite the considerable amount of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been approached to a much smaller extent. For example, Groot et al. BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators' labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators.
Classification model
In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents.
Proposed model
Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is assigned a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can exploit to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes; however, that would considerably increase the complexity of inference.
Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases.
The generative process of the proposed model for classification problems can then be summarized as follows:
For each annotator INLINEFORM0
  For each class INLINEFORM0
    Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
  Draw topic distribution INLINEFORM0
For each document INLINEFORM0
  Draw topic proportions INLINEFORM0
  For the INLINEFORM0 word
    Draw topic assignment INLINEFORM0
    Draw word INLINEFORM0
  Draw latent (true) class INLINEFORM0
  For each annotator INLINEFORM0
    Draw annotator's label INLINEFORM0
where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0
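To make the generative process concrete, the following is a minimal ancestral-sampling sketch of it in Python. All sizes, hyper-parameter values and variable names are illustrative assumptions and do not correspond to the paper's notation or experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative only): T topics, C classes, R annotators, V vocabulary words
T, C, R, V, N_WORDS = 5, 4, 3, 1000, 60
alpha, tau, omega = 1.0, 0.1, 1.0              # assumed symmetric Dirichlet hyper-parameters
eta = rng.normal(size=(C, T))                  # class coefficients on the mean topic assignments

beta = rng.dirichlet(np.full(V, tau), size=T)          # per-topic distributions over words
pi = rng.dirichlet(np.full(C, omega), size=(R, C))     # per-annotator confusion rows pi[r, c, :]

def sample_document():
    theta = rng.dirichlet(np.full(T, alpha))                 # topic proportions
    z = rng.choice(T, size=N_WORDS, p=theta)                 # per-word topic assignments
    w = np.array([rng.choice(V, p=beta[t]) for t in z])      # words
    zbar = np.bincount(z, minlength=T) / N_WORDS             # mean topic assignment
    logits = eta @ zbar
    p_class = np.exp(logits - logits.max())
    p_class /= p_class.sum()                                 # softmax over classes
    c = rng.choice(C, p=p_class)                             # latent true class
    labels = [rng.choice(C, p=pi[r, c]) for r in range(R)]   # noisy labels from each annotator
    return w, c, labels

words, true_class, crowd_labels = sample_document()
print(true_class, crowd_labels)
```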
Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variables from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators' reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 .
For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document. Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0
This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator" is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator" has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator" is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator" corresponds to the worst kind of annotator (with high bias and low precision).
Having specified a model for the annotators' answers given the true targets, the only thing left to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:
For each annotator INLINEFORM0
  For each class INLINEFORM0
    Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
  Draw topic distribution INLINEFORM0
For each document INLINEFORM0
  Draw topic proportions INLINEFORM0
  For the INLINEFORM0 word
    Draw topic assignment INLINEFORM0
    Draw word INLINEFORM0
  Draw latent (true) target INLINEFORM0
  For each annotator INLINEFORM0
    Draw answer INLINEFORM0
Fig. FIGREF60 shows a graphical representation of the proposed model.
Approximate inference
Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference.
Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters.
Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0
which we maximize using coordinate ascent.
Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0
The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero, yields the following update DISPLAYFORM0
Similarly, the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 . Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0
Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators “weighted" by their expected log probability INLINEFORM3 .
The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0
the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0
where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0
Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fixed-point update DISPLAYFORM0
where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 .
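Assuming the local bound referred to above is the usual first-order bound log x <= x/a + log a - 1 (tight at x = a), the following short numerical check illustrates the inequality with arbitrary values; it is only a sanity check of the bound, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.exp(rng.normal(size=1000))      # positive values standing in for the softmax normalizer
a = 2.0                                # auxiliary variational parameter (arbitrary)

bound = x / a + np.log(a) - 1.0        # first-order bound on log(x)
assert np.all(np.log(x) <= bound + 1e-12)
print(np.log(a), a / a + np.log(a) - 1.0)   # the two coincide: the bound is tight at x = a
```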
The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.
The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower-bound on the log marginal likelihood is now given by DISPLAYFORM0
where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets.
Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0
where we defined INLINEFORM0 . Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution, which is given by INLINEFORM1 .
The only variables left for doing inference on are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0
Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0
Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.
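Assuming the update takes the standard Gaussian precision-weighting form described above (with a prior variance sigma2 on the latent true target), a minimal sketch of this posterior mean and variance is shown below; all names and numbers are illustrative, not the paper's notation.

```python
import numpy as np

def true_target_posterior(zbar, eta, sigma2, answers, biases, precisions):
    """Sketch of the Gaussian posterior over one document's latent true target."""
    answers, biases, precisions = map(np.asarray, (answers, biases, precisions))
    prior_precision = 1.0 / sigma2
    total_precision = prior_precision + precisions.sum()
    weighted_sum = prior_precision * (eta @ zbar) + np.sum(precisions * (answers - biases))
    return weighted_sum / total_precision, 1.0 / total_precision   # mean, variance

m, v = true_target_posterior(zbar=np.array([0.5, 0.3, 0.2]),
                             eta=np.array([4.0, 2.0, 1.0]),
                             sigma2=1.0,
                             answers=[3.1, 2.2, 5.0],
                             biases=[0.1, -0.3, 2.5],
                             precisions=[10.0, 3.0, 10.0])
print(m, v)
```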
As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0
and taking derivatives, yielding the update DISPLAYFORM0
Parameter estimation
The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore well the parameter space without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0
where, for convenience, we defined the following variable: INLINEFORM0 .
The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators' biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM.
Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t. INLINEFORM2 and setting them to zero, gives the following solution for INLINEFORM3 DISPLAYFORM0
where DISPLAYFORM0
We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0
Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0
Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0
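Assuming the closed-form updates take the standard shapes sketched above (average residual for the bias; expected squared bias-corrected residual, including the posterior variance of the latent target, for the inverse precision), a minimal illustration is given below; variable names and numbers are illustrative.

```python
import numpy as np

def update_annotator_noise(answers, m, v):
    """M-step sketch for one annotator's bias and precision (assumed Gaussian closed forms).

    answers : answers y_d^r of this annotator on the documents it labeled
    m, v    : variational means and variances of the latent true targets of those documents
    """
    answers, m, v = map(np.asarray, (answers, m, v))
    bias = np.mean(answers - m)                              # average residual
    inv_precision = np.mean((answers - m - bias) ** 2 + v)   # expected squared residual
    return bias, 1.0 / inv_precision

b, p = update_annotator_noise(answers=[3.2, 4.1, 1.9], m=[3.0, 3.8, 2.1], v=[0.05, 0.04, 0.06])
print(b, p)
```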
Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinate ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 .
Stochastic variational inference
In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 .
Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given a uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 , by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients.
Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .
Stochastic variational inference for the proposed classification model
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5
repeat
  Set t = t + 1
  Sample a document INLINEFORM6 uniformly from the corpus
  repeat
    Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8
    Compute INLINEFORM9 using Eq. EQREF25
    Compute INLINEFORM10 using Eq. EQREF29
  until local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge
  Compute step-size INLINEFORM14
  Update topics variational parameters DISPLAYFORM0
  Update annotators confusion parameters DISPLAYFORM0
until global convergence criterion is met
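The global update in the algorithm above follows the usual stochastic natural-gradient recipe; a minimal sketch with a placeholder Robbins-Monro step-size schedule is given below (the delay and forgetting rate shown are illustrative, not the values tuned in the experiments).

```python
import numpy as np

def svi_step(lam, lam_hat, t, kappa=0.7, delay=1.0):
    """One stochastic update of a global variational parameter.

    lam     : current global parameter (e.g. one topic's Dirichlet parameter vector)
    lam_hat : intermediate estimate from a sampled document/mini-batch, rescaled
              as if that sample were the whole corpus
    """
    rho = (t + delay) ** (-kappa)             # satisfies sum(rho) = inf and sum(rho^2) < inf
    return (1.0 - rho) * lam + rho * lam_hat  # step along the noisy natural gradient

rng = np.random.default_rng(2)
lam = np.ones(1000)                           # e.g. a topic over a 1000-word vocabulary
for t in range(1, 6):
    lam_hat = 1.0 + rng.random(1000)          # placeholder noisy estimate
    lam = svi_step(lam, lam_hat, t)
print(lam[:3])
```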
As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference algorithm for the proposed regression model. In this case, the only “global" latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local" latent variables, instead of a single variable INLINEFORM1 , we now have two variables per document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators' biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .
Stochastic variational inference for the proposed regression model
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6
repeat
  Set t = t + 1
  Sample a document INLINEFORM7 uniformly from the corpus
  repeat
    Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9
    Compute INLINEFORM10 using Eq. EQREF25
    Compute INLINEFORM11 using Eq. EQREF66
    Compute INLINEFORM12 using Eq. EQREF68
  until local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge
  Compute step-size INLINEFORM16
  Update topics variational parameters DISPLAYFORM0
until global convergence criterion is met
Document classification
In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topics distributions. Letting the topics distribution over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0
Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0
Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0
This is equivalent to making predictions in the classification version of sLDA BIBREF2 .
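A small sketch of this prediction rule: given the inferred mean topic assignment of a new document and the learned class coefficients, the predicted class is the arg-max of the softmax scores. The values below are illustrative placeholders.

```python
import numpy as np

def predict_class(zbar, eta):
    """Predict the class of a new document from its inferred mean topic
    assignment zbar and the learned coefficients eta (one row per class)."""
    logits = eta @ zbar
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over classes
    return int(np.argmax(probs)), probs

c_hat, p = predict_class(zbar=np.array([0.1, 0.6, 0.3]),
                         eta=np.array([[1.0, -2.0, 0.5],
                                       [0.2,  3.0, -1.0],
                                       [-1.0, 0.0, 2.0]]))
print(c_hat, p)
```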
Regression model
In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model with a special focus on the how to handle multiple annotators with different biases and reliabilities when the target variables are continuous variables. Next, we present a variational inference algorithm, highlighting the differences to the classification version. Finally, we show how to optimize the model parameters.
Experiments
In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting the number of stars that a given user gave to a restaurant based on the review; predicting movie ratings using the text of the reviews.
Classification
In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups corpus consists of twenty thousand messages taken from twenty newsgroups, and is divided into six super-classes, which are, in turn, partitioned into several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers", “science", “politics" and “recreative". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the lines of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotators' answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.
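The simulation just described amounts to sampling each answer from the row of the simulated annotator's confusion matrix indexed by the true label; a minimal sketch with a hypothetical 3-class confusion matrix follows (the matrix and label counts are illustrative, not the ones used in the experiments).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-class confusion matrix of one simulated annotator:
# row c holds P(answer = l | true label = c).
pi_r = np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.5, 0.2],
                 [0.1, 0.3, 0.6]])

true_labels = rng.integers(0, 3, size=1000)                     # stand-in ground truth
answers = np.array([rng.choice(3, p=pi_r[c]) for c in true_labels])
print("simulated accuracy:", np.mean(answers == true_labels))   # roughly the diagonal mass
```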
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 . A minimal sketch of this baseline is shown after this list.
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
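As referenced in the first item above, a minimal scikit-learn sketch of the LDA + LogReg (mv) baseline could look as follows; `docs` and `crowd_labels` are placeholders for the preprocessed documents and the per-document annotator answers (with -1 marking missing answers), and the number of topics is arbitrary.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

def majority_vote(crowd_labels):
    """crowd_labels: (num_docs, num_annotators) integer array, -1 marking missing answers."""
    voted = []
    for row in np.asarray(crowd_labels):
        row = row[row >= 0]
        voted.append(np.bincount(row).argmax())
    return np.array(voted)

def lda_logreg_mv(docs, crowd_labels, n_topics=50):
    X = CountVectorizer(stop_words="english").fit_transform(docs)
    theta = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit_transform(X)
    y_mv = majority_vote(crowd_labels)
    return LogisticRegression(max_iter=1000).fit(theta, y_mv)
```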
For all the experiments, the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version performing best.
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .
The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers.
The results obtained by the different approaches are given in Fig. FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm is using mini-batches of 300 documents.
The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes to non-textual data. Using the Matlab interface provided on the project's website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway", “inside city", “tall building", “street", “forest", “coast", “mountain" or “open country". This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers.
The preprocessing of the images used is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order to construct a vocabulary of 200 “visual words". This allows us to represent the images with a bag of visual words model.
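A sketch of this bag-of-visual-words construction, assuming the dense 128-dimensional SIFT descriptors have already been extracted for each image by some SIFT implementation; the clustering and histogram steps below use scikit-learn and the vocabulary size of 200 mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_word_counts(descriptors_per_image, n_visual_words=200, seed=0):
    """descriptors_per_image: list of (n_i, 128) arrays of SIFT descriptors, one per image.
    Returns a (num_images, n_visual_words) count matrix usable as a bag-of-words input."""
    all_desc = np.vstack(descriptors_per_image)
    kmeans = KMeans(n_clusters=n_visual_words, n_init=10, random_state=seed).fit(all_desc)
    counts = np.zeros((len(descriptors_per_image), n_visual_words), dtype=int)
    for i, desc in enumerate(descriptors_per_image):
        words = kmeans.predict(desc)                         # map descriptors to visual words
        counts[i] = np.bincount(words, minlength=n_visual_words)
    return counts
```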
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to the one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed model is able to capture these patterns correctly and account for their effect.
To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city" to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city" when the true label is actually “street", and it is able to leverage that fact to infer that the correct label is “street". Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (likelihood distribution for these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somehow surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city".
Regression
As for the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists of tokenizing the text into bigrams and discarding those that appear in fewer than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true rating of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low precision annotators where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749]. Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments.
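A minimal sketch of this simulation, using the (bias, precision) pairs listed above; the ground-truth ratings below are uniform placeholders, so the resulting per-annotator coefficients of determination will only roughly resemble the values reported for the real data.

```python
import numpy as np

rng = np.random.default_rng(7)
bias_precision = [(0.1, 10.0), (-0.3, 3.0), (-2.5, 10.0), (0.1, 0.5), (1.0, 0.25)]

x = rng.uniform(1, 5, size=500)            # placeholder ground-truth ratings
answers = np.stack([rng.normal(loc=x + b, scale=1.0 / np.sqrt(p), size=x.shape)
                    for b, p in bias_precision])

# Coefficient of determination of each simulated annotator w.r.t. the ground truth
ss_res = np.sum((answers - x) ** 2, axis=1)
ss_tot = np.sum((x - x.mean()) ** 2)
print(1.0 - ss_res / ss_tot)
```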
We compare the proposed model (MA-sLDAr) with the two following baselines:
LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.
sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.
Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotator simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only performed experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods.
The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then to predict how much a person liked a movie based on what she says about it. We asked workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per review was collected for a total of 1500 reviews. The remaining reviews were used for testing. On average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ).
The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines.
With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the color of two horizontally-adjacent squares would then be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 .
Conclusion
This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution to the multi-annotator paradigm. This model is capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but that inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk, that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of a restaurant and the rating of a movie based on their reviews. For this, we used various popular datasets from the state of the art that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets.
Acknowledgment
The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for funding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS).
Mariana Lourenço has an MSc degree in Informatics Engineering from the University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing.
Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra in Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and an MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible for, or participated in, several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is an IEEE Senior Member, and a member of the IAPR International Association for Pattern Recognition and ACM.
Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master's (2000) and Ph.D. (2005) degrees in Computer Science from the University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor at the University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from the Transportation Research Board in 2015.
f7789313a804e41fcbca906a4e5cf69039eeef9f | f7789313a804e41fcbca906a4e5cf69039eeef9f_1 | Q: what datasets were used?
Text: Introduction
Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 .
Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 .
Supervised topics models are then state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and unpractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case where a single oracle labels an entire collection.
Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ).
The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models.
In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds.
Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart.
Supervised topic models
Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.
From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions. The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 .
An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels.
From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multimonial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multimonial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of theses features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be in a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions.
The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. In contrast, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics.
Learning from multiple annotators
Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.
While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm.
Despite the considerable number of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been approached to a much smaller extent. For example, Groot et al. BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators' labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators.
Classification model
In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents.
Proposed model
Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is assigned a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can exploit to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes; however, that would considerably increase the complexity of inference.
Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases.
The generative process of the proposed model for classification problems can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) class INLINEFORM0
For each annotator INLINEFORM0
Draw annotator's label INLINEFORM0
where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0
Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variables from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators' reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 .
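For illustration, the following minimal sketch samples a toy corpus under this generative process. The dimensions, hyper-parameter values and variable names (alpha, tau, omega, eta) are illustrative assumptions and do not reproduce the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, C, V, R, D, N = 5, 3, 50, 4, 20, 30   # topics, classes, vocab size, annotators, docs, words/doc
alpha, tau, omega = 1.0, 0.1, 1.0        # Dirichlet hyper-parameters (assumed values)
eta = rng.normal(size=(C, K))            # class coefficients (assumed fixed here)

beta = rng.dirichlet(tau * np.ones(V), size=K)                       # per-topic word distributions
# per-annotator confusion matrices: row c is the label distribution given true class c
pi = np.stack([rng.dirichlet(omega * np.ones(C), size=C) for _ in range(R)])

docs, true_classes, crowd_labels = [], [], []
for d in range(D):
    theta = rng.dirichlet(alpha * np.ones(K))                        # topic proportions
    z = rng.choice(K, size=N, p=theta)                               # per-word topic assignments
    w = np.array([rng.choice(V, p=beta[k]) for k in z])              # words
    zbar = np.bincount(z, minlength=K) / N                           # mean topic assignment
    p_c = np.exp(eta @ zbar); p_c /= p_c.sum()                       # softmax over classes
    c = rng.choice(C, p=p_c)                                         # latent true class
    y = np.array([rng.choice(C, p=pi[r, c]) for r in range(R)])      # noisy annotator labels
    docs.append(w); true_classes.append(c); crowd_labels.append(y)
```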
For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document. Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0
This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator” is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator” has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator” is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator” corresponds to the worst kind of annotator (with high bias and low precision).
Having specified a model for the annotators' answers given the true targets, the only thing left to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:
For each annotator INLINEFORM0
For each class INLINEFORM0
Draw reliability parameter INLINEFORM0
For each topic INLINEFORM0
Draw topic distribution INLINEFORM0
For each document INLINEFORM0
Draw topic proportions INLINEFORM0
For the INLINEFORM0 word
Draw topic assignment INLINEFORM0
Draw word INLINEFORM0
Draw latent (true) target INLINEFORM0
For each annotator INLINEFORM0
Draw answer INLINEFORM0
Fig. FIGREF60 shows a graphical representation of the proposed model.
Approximate inference
Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference.
Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters.
Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0
which we maximize using coordinate ascent.
Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0
The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero yields the following update DISPLAYFORM0
Similarly, the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
where INLINEFORM0 . Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0
Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators “weighted" by their expected log probability INLINEFORM3 .
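The following sketch shows one plausible implementation of this combination (the exact expression is the update displayed above, which is not reproduced here); the function signature and the names zbar, eta and xi are assumptions made for illustration.

```python
import numpy as np
from scipy.special import digamma

def update_class_posterior(zbar, eta, labels, xi):
    """Plausible form of the per-document update over true classes: combine the
    topic-based predictor eta_c . zbar with each annotator's expected log
    confusion probability under its variational Dirichlet posterior.
    zbar:   (K,) mean topic assignment of the document
    eta:    (C, K) class coefficients
    labels: dict {annotator r: observed label y_r}
    xi:     (R, C, C) variational Dirichlet parameters of the confusion matrices
    """
    log_lam = eta @ zbar                                               # topic-based evidence, one score per class
    for r, y_r in labels.items():
        expected_log_pi = digamma(xi[r]) - digamma(xi[r].sum(axis=1, keepdims=True))
        log_lam += expected_log_pi[:, y_r]                             # annotator evidence, indexed by true class
    log_lam -= log_lam.max()                                           # numerical stability
    lam = np.exp(log_lam)
    return lam / lam.sum()                                             # normalized distribution over classes
```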
The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0
the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0
where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0
Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fixed-point update DISPLAYFORM0
where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 .
The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.
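Schematically, the batch algorithm can be organized as below; the update callables stand in for the coordinate ascent equations referenced in the text, and the tolerance and iteration cap are arbitrary choices.

```python
import numpy as np

def coordinate_ascent(update_local, update_global, compute_elbo, tol=1e-4, max_iter=500):
    """Schematic outer loop of the (batch) variational inference algorithm:
    alternate local updates (per-document variational parameters) and global
    updates (topics and annotator reliability parameters) until the evidence
    lower bound converges under a relative-change criterion."""
    elbo_prev = -np.inf
    for _ in range(max_iter):
        update_local()       # per-document parameters (e.g. Eqs. EQREF25-EQREF33)
        update_global()      # topics and confusion-matrix parameters
        elbo = compute_elbo()
        if np.abs(elbo - elbo_prev) < tol * np.abs(elbo_prev):
            break
        elbo_prev = elbo
    return elbo
```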
The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower-bound on the log marginal likelihood is now given by DISPLAYFORM0
where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets.
Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0
where we defined INLINEFORM0 . Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution, which is given by INLINEFORM1 .
The only variables left for doing inference on are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0
Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0
Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.
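A plausible reading of this update, under the assumption that the regression prediction enters with a fixed model precision 1/sigma^2, is the precision-weighted combination sketched below; the variable names and the exact weighting constant are assumptions, since the displayed equation is not reproduced here.

```python
import numpy as np

def update_true_target(zbar, eta, answers, biases, precisions, sigma2=1.0):
    """Assumed precision-weighted combination described in the text: the regression
    prediction eta . zbar is averaged with each annotator's bias-corrected answer,
    weighted by that annotator's precision."""
    prior_precision = 1.0 / sigma2
    num = prior_precision * (eta @ zbar)
    den = prior_precision
    for r, y_r in answers.items():
        num += precisions[r] * (y_r - biases[r])
        den += precisions[r]
    mean = num / den
    var = 1.0 / den          # corresponding posterior variance of the latent target
    return mean, var
```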
As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0
and taking derivatives, yielding the update DISPLAYFORM0
Parameter estimation
The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore well the parameter space without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0
where, for convenience, we defined the following variable: INLINEFORM0 .
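In practice, this M-step can be carried out with an off-the-shelf L-BFGS routine; the sketch below uses SciPy, with the objective and gradient passed in as callables that stand in for the displayed expressions.

```python
import numpy as np
from scipy.optimize import minimize

def m_step_eta(eta_init, neg_objective, neg_gradient):
    """Numerical M-step for the class coefficients using L-BFGS, as described.
    neg_objective / neg_gradient are callables returning the negative of the
    eta-dependent part of the evidence lower bound and its gradient (placeholders
    for the expressions displayed in the text)."""
    C, K = eta_init.shape
    res = minimize(
        fun=lambda x: neg_objective(x.reshape(C, K)),
        x0=eta_init.ravel(),
        jac=lambda x: neg_gradient(x.reshape(C, K)).ravel(),
        method="L-BFGS-B",
    )
    return res.x.reshape(C, K)
```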
The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators' biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM.
Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed-form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t. INLINEFORM2 and setting them to zero gives the following solution for INLINEFORM3 DISPLAYFORM0
where DISPLAYFORM0
We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0
Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0
Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0
The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0
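A plausible reconstruction of these two closed-form updates for a single annotator is sketched below; in particular, the inclusion of the posterior variance of the latent targets in the precision update is our assumption about the elided equations.

```python
import numpy as np

def update_annotator_noise(answers_r, target_means, target_vars):
    """Assumed closed-form updates for one annotator:
    bias      = average residual between the annotator's answers and the expected
                latent targets over the documents this annotator labeled;
    precision = inverse of the average squared bias-corrected residual
                (plus the posterior variance of the latent targets)."""
    residuals = answers_r - target_means
    bias = residuals.mean()
    var = np.mean((residuals - bias) ** 2 + target_vars)
    return bias, 1.0 / var
```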
Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinate ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 .
Stochastic variational inference
In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 .
Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given a uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 , by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients.
Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .
Stochastic variational inference for the proposed classification model:
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5
repeat
  Set t = t + 1
  Sample a document INLINEFORM6 uniformly from the corpus
  repeat
    Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8
    Compute INLINEFORM9 using Eq. EQREF25
    Compute INLINEFORM10 using Eq. EQREF29
  until the local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge
  Compute step-size INLINEFORM14
  Update the topics variational parameters DISPLAYFORM0
  Update the annotators confusion parameters DISPLAYFORM0
until the global convergence criterion is met
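For concreteness, one global update of this kind could look as follows; the step-size schedule (tau0 + t)^(-kappa) and the variable names are illustrative choices that satisfy the Robbins-Monro conditions, not values taken from the paper.

```python
def svi_global_step(zeta, doc_stats, t, D, prior, tau0=1.0, kappa=0.7):
    """One stochastic update of a global variational parameter (e.g. the topic
    parameters), using a single uniformly sampled document:
      rho_t      -- step size; for 0.5 < kappa <= 1 the schedule satisfies
                    sum rho_t = inf and sum rho_t^2 < inf;
      noisy_zeta -- estimate obtained as if the sampled document were seen D times."""
    rho_t = (tau0 + t) ** (-kappa)
    noisy_zeta = prior + D * doc_stats        # unbiased noisy estimate from the sampled document
    return (1.0 - rho_t) * zeta + rho_t * noisy_zeta
```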
As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference algorithm for the proposed regression model. In this case, the only “global” latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local” latent variables, instead of a single variable INLINEFORM1 , we now have two variables per document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators' biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .
Stochastic variational inference for the proposed regression model:
Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6
repeat
  Set t = t + 1
  Sample a document INLINEFORM7 uniformly from the corpus
  repeat
    Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9
    Compute INLINEFORM10 using Eq. EQREF25
    Compute INLINEFORM11 using Eq. EQREF66
    Compute INLINEFORM12 using Eq. EQREF68
  until the local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge
  Compute step-size INLINEFORM16
  Update the topics variational parameters DISPLAYFORM0
until the global convergence criterion is met
Document classification
In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topics distributions. Letting the topics distribution over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0
Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0
Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0
This is equivalent to making predictions in the classification version of sLDA BIBREF2 .
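A minimal sketch of this prediction step, assuming the per-word topic posteriors phi have already been inferred for the new document with the LDA-style updates above:

```python
import numpy as np

def predict_class(phi, eta):
    """phi: (N, K) per-word variational topic assignments of a new, unlabeled
    document; eta: (C, K) class coefficients estimated during training.
    Predict the class via the softmax over eta . zbar, as in the classification
    version of sLDA."""
    zbar = phi.mean(axis=0)                  # mean topic assignment of the document
    scores = eta @ zbar
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(probs.argmax()), probs
```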
Regression model
In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model with a special focus on how to handle multiple annotators with different biases and reliabilities when the target variables are continuous. Next, we present a variational inference algorithm, highlighting the differences from the classification version. Finally, we show how to optimize the model parameters.
Experiments
In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting the number of stars that a given user gave to a restaurant based on the review; and predicting movie ratings using the text of the reviews.
Classification
In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups corpus consists of twenty thousand messages taken from twenty newsgroups, and is divided into six super-classes, which are, in turn, partitioned into several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers”, “science”, “politics” and “recreative”. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the rows of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotator's answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.
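A sketch of this simulation is given below. The construction of the confusion matrices (target accuracy on the diagonal, remaining mass spread uniformly) and the disjoint split of the documents across annotators are assumptions made for illustration; the paper only reports the resulting accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 4                                             # the four super-classes used
target_acc = [0.737, 0.468, 0.284, 0.278, 0.260]  # reported accuracies of the 5 simulated annotators

def confusion_matrix(acc, num_classes):
    # assumed construction: `acc` on the diagonal, rest spread uniformly off-diagonal
    off = (1.0 - acc) / (num_classes - 1)
    return np.full((num_classes, num_classes), off) + np.eye(num_classes) * (acc - off)

def simulate_answers(true_labels, accuracies):
    num_docs = len(true_labels)
    splits = np.array_split(rng.permutation(num_docs), len(accuracies))  # no repeated labelling
    answers = {}
    for r, (acc, idx) in enumerate(zip(accuracies, splits)):
        pi_r = confusion_matrix(acc, C)
        answers[r] = {d: rng.choice(C, p=pi_r[true_labels[d]]) for d in idx}
    return answers
```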
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:
[itemsep=0.02cm]
LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .
LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topic distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
For all the experiments, the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroups dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version performing best.
In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.
In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .
The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers.
The results obtained by the different approaches are given in Fig. FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm is using mini-batches of 300 documents.
The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes to non-textual data. Using the Matlab interface provided on the project's website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway”, “inside city”, “tall building”, “street”, “forest”, “coast”, “mountain” or “open country”. This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers.
The preprocessing of the images used is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order to construct a vocabulary of 200 “visual words”. This allows us to represent the images with a bag-of-visual-words model.
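A sketch of this bag-of-visual-words pipeline using OpenCV and scikit-learn is given below; the grid step, the random keypoint sizes and the library choices are illustrative (the paper uses a grid spaced at one pixel, which is coarsened here to keep the sketch fast).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dense_sift(image, step=8):
    """SIFT descriptors computed on a dense grid of keypoints, with region sizes
    sampled uniformly in [16, 32] as described; expects an 8-bit grayscale image."""
    rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    keypoints = [cv2.KeyPoint(float(x), float(y), float(rng.uniform(16, 32)))
                 for y in range(0, h, step) for x in range(0, w, step)]
    sift = cv2.SIFT_create()
    _, descriptors = sift.compute(image, keypoints)
    return descriptors

def bag_of_visual_words(images, k=200):
    """Cluster all descriptors into k visual words and represent each image as a
    histogram of word counts."""
    all_desc = np.vstack([dense_sift(img) for img in images])
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)
    histograms = [np.bincount(kmeans.predict(dense_sift(img)), minlength=k) for img in images]
    return np.array(histograms), kmeans
```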
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:
Bosch 2006 (mv): This baseline is similar to the one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier on the documents' topic distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).
The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.
Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.
In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed model is able to capture these patterns correctly and account for their effects.
To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city” to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city” when the true label is actually “street”, and it is able to leverage that fact to infer that the correct label is “street”. Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (the likelihood distribution for these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somewhat surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city”.
Regression
As for the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists of tokenizing the text into bigrams and discarding those that appear in fewer than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.
As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true review of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low precision annotators where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749]. Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments.
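The simulation and the per-annotator coefficients of determination can be reproduced with a few lines; the placeholder "true" ratings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
annotators = [(0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5), (1, 0.25)]  # (bias, precision) pairs

def simulate(true_ratings, bias, precision):
    # y = N(x + bias, 1/precision), following the annotator noise model
    return true_ratings + bias + rng.normal(0.0, 1.0 / np.sqrt(precision), size=len(true_ratings))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

x = rng.uniform(1, 5, size=1000)                        # placeholder "true" overall ratings
answers = np.array([simulate(x, b, p) for b, p in annotators])
per_annotator_r2 = [r_squared(x, y) for y in answers]   # heterogeneous, as in the reported values
mean_answer_r2 = r_squared(x, answers.mean(axis=0))     # R^2 of the averaged answers
```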
We compare the proposed model (MA-sLDAr) with the two following baselines:
[itemsep=0.02cm]
LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.
sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.
Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotator simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only performed experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods.
The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then to predict how much a person liked a movie based on what she says about it. We asked workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per review was collected for a total of 1500 reviews. The remaining reviews were used for testing. On average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ).
The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines.
With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the colour of two horizontally-adjacent squares would then be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 .
Conclusion
This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution to the multi-annotator paradigm. This model is capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk, that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of a restaurant and the rating of a movie based on their reviews. For this, we used various popular datasets from the state of the art that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets.
Acknowledgment
The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for funding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS).
[]Mariana Lourenço has an MSc degree in Informatics Engineering from the University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds, and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing.
[]Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra in Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and a MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible/participated in several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is IEEE Senior Member, and member of IARP International Association of Pattern Recognition and ACM.
[]Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master's (2000) and Ph.D. (2005) degrees in Computer Science from the University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor at the University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award, in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from Transportation Research Board, in 2015. | 20-Newsgroups benchmark corpus , Reuters-21578, LabelMe
2376c170c343e2305dac08ba5f5bda47c370357f | 2376c170c343e2305dac08ba5f5bda47c370357f_0 | Q: How was the dataset collected?
Text: Introduction
Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.
Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to be located in the center of the town.
In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features compared with other corpora (particularly MultiWOZ BIBREF12):
The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.
It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).
Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.
In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.
Related Work
According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and best-known datasets, ATIS BIBREF6, used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires much human effort, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of dialogue data. The third one is machine-to-machine dialogues. It needs to build both user and system simulators to generate dialogue outlines, then use templates BIBREF3 to generate dialogues or further employ people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.
Most of the existing datasets only involve single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently the Schema dataset is collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to locate in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).
Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.
To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.
To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available.
Data Collection
Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:
Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.
Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets to be located near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
Data Collection ::: Database Construction
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain.
Data Collection ::: Goal Generation
To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.
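As an illustration (with invented entity values), a goal in this semantic-tuple representation could look as follows; blank values mark requestable slots and the "(id=...)" references mark cross-domain informable slots.

```python
# (sub-goal id, domain, slot, value); the concrete values here are hypothetical
goal = [
    (1, "attraction", "fee",           "free"),         # common informable slot
    (1, "attraction", "name",          ""),             # requestable slot, filled during the dialogue
    (1, "attraction", "nearby hotels", ""),             # requestable slot triggering a cross-domain sub-goal
    (2, "hotel",      "name",          "near (id=1)"),  # cross-domain informable slot referring to sub-goal 1
    (2, "hotel",      "wake-up call",  "yes"),          # common informable slot
    (2, "hotel",      "rating",        ""),             # requestable slot
    (3, "taxi",       "from",          "(id=1)"),       # cross-domain informable slot
    (3, "taxi",       "to",            "(id=2)"),       # cross-domain informable slot
    (3, "taxi",       "car type",      ""),             # requestable slot
    (3, "taxi",       "plate number",  ""),             # requestable slot
]
```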
There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.
Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\mathcal {P}_{attraction\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\mathcal {P}_{attraction\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\mathcal {P}_{hotel\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.
Third, we generate sub-goals in the metro and taxi domains. With the probability of $\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\mathcal {P}_{metro}=\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.
Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as closely as possible.
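A rough illustration of this four-step sampling procedure is sketched below. The probabilities, slot handling, and reordering rule are simplified placeholders (only the "nearby hotel" direction of cross-domain sub-goals is shown), and the real generator additionally samples slot values and checks the nearby relation against the database.

```python
import random

# Hypothetical probabilities; the actual values and full slot inventories are
# defined by the goal generator described above.
P_INDEP = 0.6                                                   # P
P_CROSS = {"attraction": 0.3, "restaurant": 0.3, "hotel": 0.0}  # P_{domain->hotel}
P_TRAFFIC = 0.3                                                 # P_taxi = P_metro
HAR = ["hotel", "attraction", "restaurant"]

def generate_goal():
    goal, next_id = [], 1
    # Step 1: independent sub-goals in HAR domains
    for domain in HAR:
        if random.random() < P_INDEP:
            goal.append({"id": next_id, "domain": domain, "cross": None})
            next_id += 1
    # Step 2: cross-domain sub-goals via the "nearby" relation
    for sub in list(goal):
        if sub["domain"] != "hotel" and random.random() < P_CROSS[sub["domain"]]:
            goal.append({"id": next_id, "domain": "hotel", "cross": sub["id"]})
            next_id += 1
    # Step 3: a taxi/metro sub-goal commuting between two generated entities
    if len(goal) >= 2 and random.random() < P_TRAFFIC:
        src, dst = random.sample(goal, 2)
        goal.append({"id": next_id, "domain": random.choice(["taxi", "metro"]),
                     "from": src["id"], "to": dst["id"]})
    # Step 4: keep a referred sub-goal next to the sub-goal that refers to it
    goal.sort(key=lambda s: (s.get("cross") or s["id"], s["id"]))
    return goal[:5]   # at most five sub-goals

if __name__ == "__main__":
    print(generate_goal())
```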
To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.
Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context.
Data Collection ::: Dialogue Collection
We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers participated in the data collection.
In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and needed to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly, and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.
Data Collection ::: Dialogue Collection ::: User Side
The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue.
Data Collection ::: Dialogue Collection ::: Wizard Side
We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system.
Data Collection ::: Dialogue Annotation
After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labeled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labeled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labeled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommends them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as "goodbye" and "thanks" at both user and system sides, keyword matching is applied.
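The user-side portion of these rules can be pictured in a few lines. The sketch below maps a selected semantic tuple to a dialogue act in the way the examples above describe; the tuple format and the "near (id=...)" marker are simplified stand-ins for the actual annotation pipeline.

```python
# Map a selected user semantic tuple (sub_goal_id, domain, slot, value) to a
# dialogue act (intent, domain, slot, value), following the rules above.
# The value encodings ("", "near (id=1)") are simplified placeholders.

def tuple_to_act(semantic_tuple, goal):
    sub_id, domain, slot, value = semantic_tuple
    if isinstance(value, str) and value.startswith("near (id="):
        # cross-domain constraint -> Select act pointing at the source domain
        ref_id = int(value[len("near (id="):-1])
        src_domain = goal[ref_id]["domain"]
        return ("Select", domain, "src_domain", src_domain)
    if value == "":
        # empty value -> the user is asking for this slot
        return ("Request", domain, slot, "none")
    return ("Inform", domain, slot, value)

if __name__ == "__main__":
    goal = {1: {"domain": "Attraction"}, 2: {"domain": "Hotel"}}
    print(tuple_to_act((1, "Attraction", "fee", "free"), goal))     # Inform
    print(tuple_to_act((1, "Attraction", "name", ""), goal))        # Request
    print(tuple_to_act((2, "Hotel", "name", "near (id=1)"), goal))  # Select
```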
We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.
To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we did not use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each pair of experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations, which are regarded as the gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.
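For reference, dialogue act F1 between two annotations can be computed as sketched below. This is a generic micro-averaged formulation over act tuples, not the authors' evaluation script.

```python
def dialogue_act_f1(pred_acts, gold_acts):
    """Micro F1 over sets of (intent, domain, slot, value) tuples per utterance."""
    tp = fp = fn = 0
    for pred, gold in zip(pred_acts, gold_acts):
        pred, gold = set(pred), set(gold)
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example with a single utterance annotated by two annotators
a = [[("Inform", "Attraction", "fee", "free"), ("Request", "Attraction", "name", "none")]]
b = [[("Inform", "Attraction", "fee", "free")]]
print(dialogue_act_f1(a, b))  # 0.666...
```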
Statistics
After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.
According to the type of user goal, we group the dialogues in the training set into five categories:
417 dialogues have only one sub-goal in HAR domains.
1573 dialogues have multiple sub-goals (2$\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.
691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.
1,759 dialogues have multiple sub-goals (2$\sim $5) in HAR domains with cross-domain informable slots.
572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals).
The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit of maximal sub-goals number, the ratio of dialogue number of CM+T to CM is smaller than that of M+T to M.
CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table TABREF26. In addition, "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.
The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals.
Corpus Features
Our corpus is unique in the following aspects:
Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.
A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.
Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
Benchmark and Analysis
CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policies and generate simulated dialogue data. The benchmark models and the simulator will make it much easier for researchers to compare and evaluate their models on our corpus.
Benchmark and Analysis ::: Natural Language Understanding
Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.
Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
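A stripped-down PyTorch sketch of this two-headed design is given below, assuming the HuggingFace transformers API. Single linear layers stand in for the MLPs, a tiny randomly initialized BERT replaces the Chinese pre-trained model so the sketch runs offline, and the concatenation of the context embedding is omitted; none of this mirrors the exact BERTNLU implementation in ConvLab-2.

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class TwoHeadNLU(nn.Module):
    """Sketch of a BERTNLU-style model: a token-level tagger for value-bearing
    acts (BIO tags such as B-Inform-Attraction-fee) and a sentence-level
    multi-label classifier for acts without values (e.g. Request-Attraction-fee)."""

    def __init__(self, num_tags, num_acts, encoder=None):
        super().__init__()
        # The real model initializes from Chinese pre-trained BERT, e.g.
        # BertModel.from_pretrained("bert-base-chinese"); a small random config
        # is used here so the sketch runs without downloading weights.
        self.bert = encoder or BertModel(
            BertConfig(hidden_size=128, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=256))
        hidden = self.bert.config.hidden_size
        self.slot_head = nn.Linear(hidden, num_tags)   # BIO tagging head
        self.act_head = nn.Linear(hidden, num_acts)    # binary head per act

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tag_logits = self.slot_head(out.last_hidden_state)  # (B, T, num_tags)
        act_logits = self.act_head(out.pooler_output)        # (B, num_acts)
        return tag_logits, act_logits

if __name__ == "__main__":
    model = TwoHeadNLU(num_tags=11, num_acts=20)
    ids = torch.randint(0, model.bert.config.vocab_size, (2, 16))
    mask = torch.ones_like(ids)
    tags, acts = model(ids, mask)
    print(tags.shape, acts.shape)  # torch.Size([2, 16, 11]) torch.Size([2, 20])
```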
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.
Benchmark and Analysis ::: Dialogue State Tracking
Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.
Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the "fee" slot in the attraction domain will be filled with "free". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.
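The rule-based update can be pictured as a small dictionary merge, as in the sketch below; the actual RuleDST in ConvLab-2 handles more intents and bookkeeping than this simplified stand-in.

```python
import copy

def rule_dst_update(prev_state, user_acts):
    """Fill slots of the system state from Inform-style user dialogue acts.
    `prev_state` maps domain -> {slot: value}; `user_acts` is a list of
    (intent, domain, slot, value) tuples. A simplified stand-in for RuleDST."""
    state = copy.deepcopy(prev_state)
    for intent, domain, slot, value in user_acts:
        if intent == "Inform":
            state.setdefault(domain, {})[slot] = value
    return state

state = {"Attraction": {"fee": "", "name": ""}}
acts = [("Inform", "Attraction", "fee", "free"),
        ("Request", "Attraction", "name", "none")]
print(rule_dst_update(state, acts))
# {'Attraction': {'fee': 'free', 'name': ''}}
```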
Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
Benchmark and Analysis ::: Dialogue Policy Learning
Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.
Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, the last user dialogue acts, the system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ consists of the delexicalized dialogue acts of the current turn, which ignore the exact slot values; the values are filled back in after prediction.
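The sketch below shows the delexicalization of dialogue acts and the value filling after prediction; the act and entity formats are simplified placeholders rather than the exact ConvLab-2 representation.

```python
def delexicalize(acts):
    """Drop the concrete values, keeping (intent, domain, slot) as the action."""
    return [(intent, domain, slot) for intent, domain, slot, _ in acts]

def lexicalize(delex_acts, db_entity):
    """Fill predicted delexicalized acts back with values from a chosen entity."""
    filled = []
    for intent, domain, slot in delex_acts:
        value = db_entity.get(slot, "none") if intent == "Inform" else "none"
        filled.append((intent, domain, slot, value))
    return filled

entity = {"name": "Hotel X", "address": "Street 2"}
pred = [("Inform", "Hotel", "name"), ("Inform", "Hotel", "address")]
print(lexicalize(pred, entity))
# [('Inform', 'Hotel', 'name', 'Hotel X'), ('Inform', 'Hotel', 'address', 'Street 2')]
```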
Result Analysis: As illustrated in Table TABREF31, there is a large gap between the F1 score of exact dialogue acts and the F1 score of delexicalized dialogue acts, which means we need a powerful system state tracker to find the correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is a "Select" intent in the preceding user dialogue acts, the F1 scores of exact and delexicalized dialogue acts are 41.53% and 54.39%, respectively. This shows that the policy performs poorly for cross-domain transition.
Benchmark and Analysis ::: Natural Language Generation
Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.
Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM, we adapted the implementation on MultiWOZ and trained two SC-LSTMs on system-side and user-side utterances, respectively.
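A toy version of such a template-based generator is sketched below: templates are indexed by the delexicalized act combination and placeholders are filled at generation time. The key format, placeholder syntax, and template strings are invented for illustration and are not the ones extracted from CrossWOZ.

```python
# Templates are keyed by the sorted delexicalized act combination; slots are
# written as #domain-slot# placeholders. Both the key format and the template
# below are illustrative stand-ins.

TEMPLATES = {
    (("Inform", "Hotel", "address"), ("Inform", "Hotel", "name")):
        "#Hotel-name# is located at #Hotel-address#.",
}

def template_nlg(acts):
    key = tuple(sorted((i, d, s) for i, d, s, _ in acts))
    template = TEMPLATES.get(key)
    if template is None:
        return None  # the real system falls back to manually added templates
    sentence = template
    for intent, domain, slot, value in acts:
        sentence = sentence.replace(f"#{domain}-{slot}#", value)
    return sentence

acts = [("Inform", "Hotel", "name", "Hotel X"),
        ("Inform", "Hotel", "address", "Street 2")]
print(template_nlg(acts))  # Hotel X is located at Street 2.
```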
Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in high BLEU scores. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For system-side utterances, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to the fact that SC-LSTM generates common patterns while TemplateNLG retrieves original sentences that contain more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
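This reference-grouping protocol can be reproduced roughly as below, assuming NLTK's corpus_bleu is available; tokenization, the act-key encoding, and the toy data are simplified assumptions rather than the authors' evaluation code.

```python
from collections import defaultdict
from nltk.translate.bleu_score import corpus_bleu

def act_grouped_bleu(test_items, generate):
    """test_items: list of (delex_act_key, gold_tokens); `generate` maps an act
    key to a token list. All utterances sharing the act key serve as references."""
    refs_by_act = defaultdict(list)
    for act_key, tokens in test_items:
        refs_by_act[act_key].append(tokens)
    references, hypotheses = [], []
    for act_key, _ in test_items:
        references.append(refs_by_act[act_key])   # multiple references per item
        hypotheses.append(generate(act_key))
    return corpus_bleu(references, hypotheses)

items = [("Inform-Hotel-name", "这 家 酒店 叫 X".split()),
         ("Inform-Hotel-name", "酒店 的 名字 是 X".split())]
print(act_grouped_bleu(items, lambda act: "酒店 的 名字 是 X".split()))
```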
Benchmark and Analysis ::: User Simulator
Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the "Usr Policy" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.
Model: We built a rule-based user simulator that works at dialogue act level. Different from the agenda-based BIBREF24 user simulator that maintains a stack-like agenda, our simulator maintains the user state directly (Section SECREF17). The simulator will generate a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, then the simulator will fill the "fee" slot in the user state with "free", and ask for the next empty slot such as "address". The simulator terminates when all requestable slots are filled, and all cross-domain informable slots are filled by real values.
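One user turn of such a simulator can be sketched as below; the state layout and the rules are heavily simplified compared with the released simulator.

```python
def simulator_step(user_state, system_acts):
    """One user turn: absorb system Inform acts into the user state, then
    request the next empty slot. Formats are simplified stand-ins."""
    # 1) fill slots the system just informed
    for intent, domain, slot, value in system_acts:
        if intent == "Inform" and slot in user_state.get(domain, {}):
            user_state[domain][slot] = value
    # 2) ask for the next unfilled slot, or terminate if the goal is complete
    for domain, slots in user_state.items():
        for slot, value in slots.items():
            if value == "":
                return [("Request", domain, slot, "none")], False
    return [], True   # all slots filled -> terminate

state = {"Attraction": {"fee": "", "address": ""}}
acts, done = simulator_step(state, [("Inform", "Attraction", "fee", "free")])
print(acts, done)  # [('Request', 'Attraction', 'address', 'none')] False
```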
Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborate simulators as we provide the annotation of user-side dialogue states and dialogue acts.
Benchmark and Analysis ::: Evaluation with User Simulation
In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:
Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.
Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.
Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.
When all the slots in a user goal are filled by real values, the simulator terminates. This is regarded as "task finish". It is worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. We calculated the "task finish rate" over 1,000 simulations for each goal type (see Table TABREF31). Findings are summarized below:
Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.
The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.
TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural-language-level simulation. This may be attributed to the fact that BERTNLU prefers templates retrieved from the training set.
Conclusion
In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system.
Acknowledgments
This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback. | Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. , Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context., Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states., Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. |
2376c170c343e2305dac08ba5f5bda47c370357f | 2376c170c343e2305dac08ba5f5bda47c370357f_1 | Q: How was the dataset collected?
Text: Introduction
Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.
Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.
In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Tables TABREF5 and TABREF6. Our dataset has the following features compared to other corpora (particularly MultiWOZ BIBREF12):
The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.
It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).
Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.
In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.
Related Work
According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and best-known datasets, ATIS BIBREF6, used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires much human effort, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which needs a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of the dialogue data. The third one is machine-to-machine dialogues. This setting requires building both user and system simulators to generate dialogue outlines, and then using templates BIBREF3 to generate dialogues or further employing people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.
Most of the existing datasets involve only a single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. The MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently, the Schema dataset was collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to be located in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).
Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.
To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.
To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available.
Data Collection
Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:
Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.
Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets to be located near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To help workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.
Dialogue Collection: before the formal data collection started, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
Data Collection ::: Database Construction
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain.
Data Collection ::: Goal Generation
To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.
There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.
Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\mathcal {P}_{attraction\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\mathcal {P}_{attraction\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\mathcal {P}_{hotel\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.
Third, we generate sub-goals in the metro and taxi domains. With the probability of $\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\mathcal {P}_{metro}=\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.
Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as closely as possible.
To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.
Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context.
Data Collection ::: Dialogue Collection
We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers participated in the data collection.
In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and needed to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly, and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.
Data Collection ::: Dialogue Collection ::: User Side
The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue.
Data Collection ::: Dialogue Collection ::: Wizard Side
We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system.
Data Collection ::: Dialogue Annotation
After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labeled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labeled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labeled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommends them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as "goodbye" and "thanks" at both user and system sides, keyword matching is applied.
We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.
To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we did not use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each pair of experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations, which are regarded as the gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.
Statistics
After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.
According to the type of user goal, we group the dialogues in the training set into five categories:
417 dialogues have only one sub-goal in HAR domains.
1573 dialogues have multiple sub-goals (2$\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.
691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.
1,759 dialogues have multiple sub-goals (2$\sim $5) in HAR domains with cross-domain informable slots.
572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals).
The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit of maximal sub-goals number, the ratio of dialogue number of CM+T to CM is smaller than that of M+T to M.
CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table TABREF26. In addition, "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.
The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals.
Corpus Features
Our corpus is unique in the following aspects:
Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.
A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.
Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
Benchmark and Analysis
CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policies and generate simulated dialogue data. The benchmark models and the simulator will make it much easier for researchers to compare and evaluate their models on our corpus.
Benchmark and Analysis ::: Natural Language Understanding
Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.
Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.
Benchmark and Analysis ::: Dialogue State Tracking
Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.
Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the "fee" slot in the attraction domain will be filled with "free". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.
Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
Benchmark and Analysis ::: Dialogue Policy Learning
Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.
Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, the last user dialogue acts, the system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ consists of the delexicalized dialogue acts of the current turn, which ignore the exact slot values; the values are filled back in after prediction.
Result Analysis: As illustrated in Table TABREF31, there is a large gap between the F1 score of exact dialogue acts and the F1 score of delexicalized dialogue acts, which means we need a powerful system state tracker to find the correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is a "Select" intent in the preceding user dialogue acts, the F1 scores of exact and delexicalized dialogue acts are 41.53% and 54.39%, respectively. This shows that the policy performs poorly for cross-domain transition.
Benchmark and Analysis ::: Natural Language Generation
Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.
Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM, we adapted the implementation on MultiWOZ and trained two SC-LSTMs on system-side and user-side utterances, respectively.
Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in high BLEU scores. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For system-side utterances, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to the fact that SC-LSTM generates common patterns while TemplateNLG retrieves original sentences that contain more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
Benchmark and Analysis ::: User Simulator
Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the "Usr Policy" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.
Model: We built a rule-based user simulator that works at the dialogue act level. Different from the agenda-based user simulator BIBREF24, which maintains a stack-like agenda, our simulator maintains the user state directly (Section SECREF17). The simulator generates a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, the simulator will fill the "fee" slot in the user state with "free" and ask for the next empty slot, such as "address". The simulator terminates when all requestable slots are filled and all cross-domain informable slots are filled with real values.
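The hand-crafted rules can be organized as a simple state-machine update: system Inform acts fill slots in the user state, and the next user acts request the remaining empty slots. The sketch below is a minimal illustration of that loop, not the released simulator.

class RuleUserSimulator:
    """Minimal act-level user simulator over a flat user state.

    user_state: dict mapping (domain, slot) -> value; "" means not yet filled.
    """

    def __init__(self, user_state):
        self.state = dict(user_state)

    def step(self, system_acts):
        # 1) Fill slots from system Inform acts.
        for intent, domain, slot, value in system_acts:
            if intent == "Inform" and (domain, slot) in self.state:
                self.state[(domain, slot)] = value
        # 2) Ask for the next empty slot, or stop when the goal is complete.
        for (domain, slot), value in self.state.items():
            if value == "":
                return [("Request", domain, slot, "none")], False
        return [("General", "thank", "none", "none")], True

sim = RuleUserSimulator({("Attraction", "fee"): "", ("Attraction", "address"): ""})
acts, done = sim.step([("Inform", "Attraction", "fee", "free")])
print(acts, done)   # asks for the address; not finished yet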
Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborate simulators as we provide the annotation of user-side dialogue states and dialogue acts.
Benchmark and Analysis ::: Evaluation with User Simulation
In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:
Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.
Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.
Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.
When all the slots in a user goal are filled with real values, the simulator terminates. This is regarded as "task finish". It is worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. We calculated the "task finish rate" over 1,000 simulations for each goal type (see Table TABREF31); a minimal sketch of this simulation loop is given after the findings. Findings are summarized below:
Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T shows that, although each module performs well in the traffic domains, the additional sub-goals in these domains are still difficult to accomplish.
The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.
TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural-language-level simulation. This may be attributed to the fact that BERTNLU prefers the templates retrieved from the training set.
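The simulation loop referred to above can be sketched as follows; build_system and build_simulator are hypothetical factories standing in for the assembled pipeline (RuleDST plus SL policy, optionally wrapped with NLU/NLG) and the rule-based simulator, so their interfaces are assumptions of this sketch.

def task_finish_rate(build_system, build_simulator, episodes=1000, max_turns=40):
    """Run act-level simulations and report how often the user goal is filled."""
    finished = 0
    for _ in range(episodes):
        system, simulator = build_system(), build_simulator()
        user_acts, done = simulator.step([])          # the user opens the dialogue
        for _ in range(max_turns):
            if done:
                finished += 1
                break
            system_acts = system.respond(user_acts)   # RuleDST + SL policy inside
            user_acts, done = simulator.step(system_acts)
    return finished / episodes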
Conclusion
In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system.
Acknowledgments
This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback. | They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogues between workers, and automatically annotated dialogue acts. |
0137ecebd84a03b224eb5ca51d189283abb5f6d9 | 0137ecebd84a03b224eb5ca51d189283abb5f6d9_0 | Q: What are the benchmark models?
Text: Introduction
Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.
Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.
In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features compared to other corpora (particularly MultiWOZ BIBREF12):
The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.
It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).
Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.
In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.
Related Work
According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and best-known datasets, ATIS BIBREF6, used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires much human effort, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system largely influences the quality of the dialogue data. The third one is machine-to-machine dialogues. This approach requires building both user and system simulators to generate dialogue outlines, and then using templates BIBREF3 to generate dialogues, or further employing people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of the dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.
Most of the existing datasets involve only a single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. The MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently, the Schema dataset was collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to be located in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).
Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.
To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.
To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available.
Data Collection
Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:
Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.
Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets to be located near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
Data Collection ::: Database Construction
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain.
Data Collection ::: Goal Generation
To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.
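Concretely, such a goal can be stored as a flat list of semantic tuples; the snippet below shows one possible layout for the example in Table TABREF13, with None marking requestable slots (the exact encoding is an assumption of this sketch).

# One cross-domain goal: visit a free attraction, then find a hotel near it.
goal = [
    # (sub-goal id, domain, slot, value); None marks a requestable slot.
    (1, "Attraction", "fee", "free"),          # common informable slot
    (1, "Attraction", "name", None),           # requestable slot
    (1, "Attraction", "nearby hotels", None),  # requestable slot
    (2, "Hotel", "name", "near (id=1)"),       # cross-domain informable slot
    (2, "Hotel", "price", None),               # requestable slot
]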
There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.
Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\mathcal {P}_{attraction\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\mathcal {P}_{attraction\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\mathcal {P}_{hotel\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.
Third, we generate sub-goals in the metro and taxi domains. With the probability of $\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\mathcal {P}_{metro}=\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.
Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as immediately as possible.
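The four steps can be read as a single sampling procedure. The sketch below mirrors them with the probabilities left as parameters; the ToyDB helper and the simplified constraint handling are assumptions of this illustration, not the released generator.

import random

class ToyDB:
    """Stand-in for the real database; returns a dummy constraint per domain."""
    def sample_entity(self, domain):
        return {"domain": domain, "constraint": "sampled informable slots"}

def generate_goal(db, p=0.6, p_cross=0.3, p_traffic=0.4, max_subgoals=5):
    """Sample a user goal as a list of (sub-goal id, domain, constraints) entries."""
    goal, next_id = [], 1

    # Step 1: independent sub-goals in HAR domains, each drawn with probability p.
    for domain in ("Attraction", "Restaurant", "Hotel"):
        if random.random() < p:
            goal.append((next_id, domain, db.sample_entity(domain)))
            next_id += 1

    # Step 2: cross-domain sub-goals ("find a hotel near the attraction").
    for sub_id, domain, _ in list(goal):
        for target in ("Attraction", "Restaurant", "Hotel"):
            if (domain, target) == ("Hotel", "Hotel"):
                continue  # no "hotel near another hotel" sub-goals
            if len(goal) < max_subgoals and random.random() < p_cross:
                goal.append((next_id, target, {"near": sub_id}))
                next_id += 1

    # Step 3: a traffic sub-goal commuting between two existing sub-goals.
    if len(goal) >= 2 and random.random() < p_traffic:
        a, b = random.sample([g[0] for g in goal], 2)
        goal.append((next_id, random.choice(("Taxi", "Metro")), {"from": a, "to": b}))

    # Step 4 (omitted here): reorder so that a referring sub-goal immediately
    # follows the sub-goal it refers to.
    return goal

print(generate_goal(ToyDB()))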
To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.
Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context.
Data Collection ::: Dialogue Collection
We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers are participating in the data collection.
In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and needed to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.
Data Collection ::: Dialogue Collection ::: User Side
The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue.
Data Collection ::: Dialogue Collection ::: Wizard Side
We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system.
Data Collection ::: Dialogue Annotation
After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labelled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labelled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labelled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. The Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommends them, the Recommend intent is labeled. When the wizard expresses that no result satisfies the user constraints, NoOffer is labeled. For General intents such as "goodbye" and "thanks" at both user and system sides, keyword matching is applied.
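On the user side, these rules reduce to a direct mapping from selected semantic tuples to act tuples. A condensed sketch (the value encoding "near (id=k)" follows the example above; the helper itself is ours):

def user_acts_from_tuples(selected_tuples, goal_domains):
    """Map selected semantic tuples (id, domain, slot, value) to dialogue acts.

    goal_domains maps sub-goal id -> domain, used to resolve "near (id=k)" values.
    """
    acts = []
    for _, domain, slot, value in selected_tuples:
        if value in (None, ""):
            acts.append(("Request", domain, slot, "none"))
        elif isinstance(value, str) and value.startswith("near (id="):
            ref_id = int(value[len("near (id="):-1])
            acts.append(("Select", domain, "src_domain", goal_domains[ref_id]))
        else:
            acts.append(("Inform", domain, slot, value))
    return acts

goal_domains = {1: "Attraction", 2: "Hotel"}
selected = [(1, "Attraction", "fee", "free"),
            (1, "Attraction", "name", ""),
            (2, "Hotel", "name", "near (id=1)")]
print(user_acts_from_tuples(selected, goal_domains))
# [('Inform', 'Attraction', 'fee', 'free'),
#  ('Request', 'Attraction', 'name', 'none'),
#  ('Select', 'Hotel', 'src_domain', 'Attraction')]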
We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.
To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we didn't use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each two experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations which are regarded as gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.
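The dialogue act F1 used here can be computed by treating each act tuple as a set element per utterance; a small micro-averaged sketch:

def dialogue_act_f1(annotation_a, annotation_b):
    """Micro-averaged F1 between two act annotations of the same utterances.

    Each annotation is a list of sets of act tuples, one set per utterance.
    """
    tp = fp = fn = 0
    for acts_a, acts_b in zip(annotation_a, annotation_b):
        tp += len(acts_a & acts_b)
        fp += len(acts_a - acts_b)
        fn += len(acts_b - acts_a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

a = [{("Inform", "Attraction", "fee", "free")}]
b = [{("Inform", "Attraction", "fee", "free"), ("Request", "Attraction", "name", "none")}]
print(dialogue_act_f1(a, b))  # about 0.667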
Statistics
After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.
According to the type of user goal, we group the dialogues in the training set into five categories:
S: 417 dialogues have only one sub-goal in HAR domains.
M: 1,573 dialogues have multiple sub-goals (2$\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.
M+T: 691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.
CM: 1,759 dialogues have multiple sub-goals (2$\sim $5) in HAR domains with cross-domain informable slots.
CM+T: 572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals).
The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit on the maximal number of sub-goals, the ratio of the number of CM+T dialogues to CM dialogues is smaller than that of M+T dialogues to M dialogues.
CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table TABREF26. In addition, "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.
The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals.
Corpus Features
Our corpus is unique in the following aspects:
Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.
A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.
Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
Benchmark and Analysis
CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus.
Benchmark and Analysis ::: Natural Language Understanding
Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.
Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
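The two heads and the context concatenation can be sketched as below with the Hugging Face transformers API; this is a simplified illustration of the architecture described above rather than the released BERTNLU code.

import torch
import torch.nn as nn
from transformers import BertModel

class JointNLU(nn.Module):
    """BERT encoder with a tagging head (acts with values) and a binary
    classification head (acts without values), both fed the context embedding."""

    def __init__(self, num_tags, num_intents, model_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.slot_head = nn.Linear(hidden * 2, num_tags)        # BIO tags per token
        self.intent_head = nn.Linear(hidden * 2, num_intents)   # acts without values

    def forward(self, utt_ids, utt_mask, ctx_ids, ctx_mask):
        utt = self.bert(utt_ids, attention_mask=utt_mask)
        ctx = self.bert(ctx_ids, attention_mask=ctx_mask)
        ctx_cls = ctx.last_hidden_state[:, 0]                    # [CLS] of the context
        tokens = utt.last_hidden_state                           # (batch, seq, hidden)

        ctx_per_token = ctx_cls.unsqueeze(1).expand_as(tokens)
        tag_logits = self.slot_head(torch.cat([tokens, ctx_per_token], dim=-1))
        intent_logits = self.intent_head(
            torch.cat([utt.last_hidden_state[:, 0], ctx_cls], dim=-1))
        return tag_logits, intent_logits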
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.
Benchmark and Analysis ::: Dialogue State Tracking
Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.
Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the "fee" slot in the attraction domain will be filled with "free". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.
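The hand-crafted update is essentially a slot overwrite driven by user Inform acts; a minimal sketch with an assumed nested-dictionary state:

def rule_dst_update(prev_state, user_acts):
    """Return a new system state after applying user Inform acts to the previous one."""
    state = {domain: dict(slots) for domain, slots in prev_state.items()}
    for intent, domain, slot, value in user_acts:
        if intent == "Inform" and domain in state:
            state[domain][slot] = value
    return state

prev = {"Attraction": {"fee": "", "area": ""}}
acts = [("Inform", "Attraction", "fee", "free")]
print(rule_dst_update(prev, acts))   # {'Attraction': {'fee': 'free', 'area': ''}}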
Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
Benchmark and Analysis ::: Dialogue Policy Learning
Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.
Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is the set of delexicalized dialogue acts of the current turn, which ignore the exact slot values; the values are filled back in after prediction.
Result Analysis: As illustrated in Table TABREF31, there is a large gap between the F1 score of exact dialogue acts and that of delexicalized dialogue acts, which means a powerful system state tracker is needed to find the correct entities. The results also show that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is a "Select" intent in the preceding user dialogue acts, the F1 scores of exact and delexicalized dialogue acts are 41.53% and 54.39%, respectively. This shows that the policy performs poorly on cross-domain transitions.
Benchmark and Analysis ::: Natural Language Generation
Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.
Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM, we adapted the MultiWOZ implementation and trained two SC-LSTMs on system-side and user-side utterances, respectively.
Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in high BLEU scores. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For the system side, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to the fact that SC-LSTM generates common patterns while TemplateNLG retrieves original sentences that contain more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
Benchmark and Analysis ::: User Simulator
Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the "Usr Policy" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.
Model: We built a rule-based user simulator that works at the dialogue act level. Different from the agenda-based user simulator BIBREF24, which maintains a stack-like agenda, our simulator maintains the user state directly (Section SECREF17). The simulator generates a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, the simulator will fill the "fee" slot in the user state with "free" and ask for the next empty slot, such as "address". The simulator terminates when all requestable slots are filled and all cross-domain informable slots are filled with real values.
Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborate simulators as we provide the annotation of user-side dialogue states and dialogue acts.
Benchmark and Analysis ::: Evaluation with User Simulation
In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:
Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.
Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.
Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.
When all the slots in a user goal are filled with real values, the simulator terminates. This is regarded as "task finish". It is worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. We calculated the "task finish rate" over 1,000 simulations for each goal type (see Table TABREF31). Findings are summarized below:
Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T shows that, although each module performs well in the traffic domains, the additional sub-goals in these domains are still difficult to accomplish.
The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.
TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural-language-level simulation. This may be attributed to the fact that BERTNLU prefers the templates retrieved from the training set.
Conclusion
In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system.
Acknowledgments
This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback. | BERTNLU from ConvLab-2, a rule-based model (RuleDST), TRADE (Transferable Dialogue State Generator), a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy) |
5f6fbd57cce47f20a0fda27d954543c00c4344c2 | 5f6fbd57cce47f20a0fda27d954543c00c4344c2_0 | Q: How was the corpus annotated?
Text: Introduction
Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.
Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.
In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features compared to other corpora (particularly MultiWOZ BIBREF12):
The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.
It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).
Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.
In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.
Related Work
According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and best-known datasets, ATIS BIBREF6, used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires much human effort, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system largely influences the quality of the dialogue data. The third one is machine-to-machine dialogues. This approach requires building both user and system simulators to generate dialogue outlines, and then using templates BIBREF3 to generate dialogues, or further employing people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of the dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.
Most of the existing datasets involve only a single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. The MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently, the Schema dataset was collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to be located in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).
Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.
To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.
To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available.
Data Collection
Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:
Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.
Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets to be located near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
Data Collection ::: Database Construction
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain.
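The nearby slots and the metro database can be derived by a simple geometric pass over the crawled entities. The sketch below uses straight-line distance on assumed x/y coordinates and an illustrative radius; the real construction details are not specified here.

import math

def euclidean(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def add_nearby_slots(entities, radius=1.0):
    """Fill 'nearby attractions/restaurants/hotels' slots for each entity."""
    for e in entities:
        for other in entities:
            if other is e or euclidean(e, other) > radius:
                continue
            # Hotels do not get a "nearby hotels" slot, matching the description above.
            if not (e["domain"] == "Hotel" and other["domain"] == "Hotel"):
                key = "nearby " + other["domain"].lower() + "s"
                e.setdefault(key, []).append(other["name"])
    return entities

def nearest_metro(entity, stations):
    """Pick the closest metro station, forming the metro database entry."""
    return min(stations, key=lambda s: euclidean(entity, s))["name"]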
Data Collection ::: Goal Generation
To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.
There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.
Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\mathcal {P}_{attraction\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\mathcal {P}_{attraction\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\mathcal {P}_{hotel\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.
Third, we generate sub-goals in the metro and taxi domains. With the probability of $\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\mathcal {P}_{metro}=\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.
Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as immediately as possible.
To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.
Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context.
Data Collection ::: Dialogue Collection
We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers are participating in the data collection.
In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and needed to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.
Data Collection ::: Dialogue Collection ::: User Side
The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue.
Data Collection ::: Dialogue Collection ::: Wizard Side
We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system.
Data Collection ::: Dialogue Annotation
After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labeled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labeled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labeled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommends them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as "goodbye", "thanks" at both user and system sides, keyword matching is applied.
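The user-side rules can be pictured with the following rough sketch, which reproduces the three examples above; the "near (id=...)" parsing and the rule coverage are simplifications of the actual annotation scripts.

```python
def user_dialogue_act(sub_goal_id, domain, slot, value, goal_domains):
    """Map one selected semantic tuple to a (intent, domain, slot, value) dialogue act."""
    if value == "":                          # empty value: the user asks for it
        return ("Request", domain, slot, "none")
    if value.startswith("near (id="):        # cross-domain constraint: Select intent
        ref_id = int(value[len("near (id="):-1])
        return ("Select", domain, "src_domain", goal_domains[ref_id])
    return ("Inform", domain, slot, value)   # otherwise the user informs the value

goal_domains = {1: "Attraction"}             # sub-goal id -> domain of that sub-goal
print(user_dialogue_act(1, "Attraction", "fee", "free", goal_domains))    # Inform
print(user_dialogue_act(1, "Attraction", "name", "", goal_domains))       # Request
print(user_dialogue_act(2, "Hotel", "name", "near (id=1)", goal_domains)) # Select
```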
We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.
To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we did not use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each two experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations, which are regarded as the gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.
Statistics
After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.
According to the type of user goal, we group the dialogues in the training set into five categories:
417 dialogues have only one sub-goal in HAR domains.
1,573 dialogues have multiple sub-goals (2$\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.
691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.
1,759 dialogues have multiple sub-goals (2$\sim $5) in HAR domains with cross-domain informable slots.
572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals).
The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit on the maximal number of sub-goals, the ratio of the number of CM+T dialogues to that of CM dialogues is smaller than the ratio of M+T to M.
CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table TABREF26. In addition, "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.
The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) cannot further generate a sub-goal in traffic domains and thus cannot become CM+T goals.
Corpus Features
Our corpus is unique in the following aspects:
Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.
A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.
Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
Benchmark and Analysis
CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly help researchers compare and evaluate their models on our corpus.
Benchmark and Analysis ::: Natural Language Understanding
Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.
Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of the last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLPs is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
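A condensed sketch of these two heads is given below; the tag inventory sizes, the use of a single linear layer per head, and the way the context embedding is concatenated are assumptions and do not reproduce the exact ConvLab-2 implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertNLUSketch(nn.Module):
    def __init__(self, num_tags, num_binary_acts, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        h = self.bert.config.hidden_size
        # head 1: BIO tagging for Inform/Recommend acts, fed [token state ; context embedding]
        self.tagger = nn.Linear(2 * h, num_tags)
        # head 2: one binary classifier per value-free act (e.g. Request),
        # fed [sentence representation ; context embedding]
        self.classifier = nn.Linear(2 * h, num_binary_acts)

    def forward(self, utt_ids, utt_mask, ctx_ids, ctx_mask):
        utt = self.bert(utt_ids, attention_mask=utt_mask).last_hidden_state       # (B, T, h)
        ctx = self.bert(ctx_ids, attention_mask=ctx_mask).last_hidden_state[:, 0] # context [CLS]
        ctx_per_token = ctx.unsqueeze(1).expand(-1, utt.size(1), -1)
        tag_logits = self.tagger(torch.cat([utt, ctx_per_token], dim=-1))         # (B, T, num_tags)
        act_logits = self.classifier(torch.cat([utt[:, 0], ctx], dim=-1))         # (B, num_binary_acts)
        return tag_logits, act_logits
```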
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.
Benchmark and Analysis ::: Dialogue State Tracking
Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.
Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the "fee" slot in the attraction domain will be filled with "free". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.
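A toy version of such a hand-crafted update rule is sketched below; the state layout and the restriction to Inform acts are simplifying assumptions.

```python
import copy

def rule_dst_update(prev_state, user_acts):
    """Copy the previous system state and overwrite slots mentioned in the last user acts."""
    state = copy.deepcopy(prev_state)
    for intent, domain, slot, value in user_acts:
        if intent == "Inform":
            state.setdefault(domain, {})[slot] = value
    return state

prev = {"Attraction": {"fee": ""}}
acts = [("Inform", "Attraction", "fee", "free")]
print(rule_dst_update(prev, acts))   # {'Attraction': {'fee': 'free'}}
```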
Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
Benchmark and Analysis ::: Dialogue Policy Learning
Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.
Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, the last user dialogue acts, the system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ consists of the delexicalized dialogue acts of the current turn, which ignore the exact values of the slots; the values are filled back in after prediction.
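The following minimal sketch shows the general shape of such a supervised policy, with the state features concatenated into one vector and a multi-label classifier over delexicalized acts; the feature dimensions, network size, and decision threshold are placeholders, not the actual SL policy.

```python
import torch
import torch.nn as nn

class SLPolicySketch(nn.Module):
    def __init__(self, state_dim, num_delex_acts, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_delex_acts),   # one logit per delexicalized dialogue act
        )

    def forward(self, state_vec):
        return self.net(state_vec)

# the state vector concatenates the features listed above (dialogue acts, system
# state, entity count, terminal signal); 300 and 150 are placeholder dimensions
policy = SLPolicySketch(state_dim=300, num_delex_acts=150)
logits = policy(torch.randn(1, 300))
predicted_acts = torch.sigmoid(logits) > 0.5     # acts above a threshold are emitted
```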
Result Analysis: As illustrated in Table TABREF31, there is a large gap between the F1 score of exact dialogue acts and the F1 score of delexicalized dialogue acts, which means we need a powerful system state tracker to find correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is "Select" intent in preceding user dialogue acts, the F1 scores of exact dialogue acts and delexicalized dialogue acts are 41.53% and 54.39%, respectively. This shows that the policy performs poorly for cross-domain transition.
Benchmark and Analysis ::: Natural Language Generation
Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.
Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM we adapted the implementation on MultiWOZ and trained two SC-LSTM with system-side and user-side utterances respectively.
Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in high BLEU scores. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For the system side, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to the fact that SC-LSTM generates common patterns while TemplateNLG retrieves original sentences that carry more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
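The multi-reference BLEU computation can be reproduced along the lines of the sketch below, where every corpus utterance sharing the same delexicalized dialogue acts serves as a reference for the generated one; the grouping keys and toy sentences are illustrative.

```python
from nltk.translate.bleu_score import corpus_bleu

# group references by delexicalized dialogue act signature
grouped_refs = {
    "Inform-Hotel-price": [["the", "price", "is", "[price]"],
                           ["it", "costs", "[price]", "per", "night"]],
}
hypotheses = {"Inform-Hotel-price": ["the", "price", "is", "[price]"]}

references = [grouped_refs[k] for k in hypotheses]  # list of reference lists per hypothesis
hyps = [hypotheses[k] for k in hypotheses]
print(corpus_bleu(references, hyps))
```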
Benchmark and Analysis ::: User Simulator
Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the "Usr Policy" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.
Model: We built a rule-based user simulator that works at dialogue act level. Different from the agenda-based BIBREF24 user simulator that maintains a stack-like agenda, our simulator maintains the user state straightforwardly (Section SECREF17). The simulator will generate a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, then the simulator will fill the "fee" slot in the user state with "free", and ask for the next empty slot such as "address". The simulator terminates when all requestable slots are filled, and all cross-domain informable slots are filled by real values.
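A toy single-turn version of such a rule is sketched below; the state layout and the fallback to a closing act are simplifying assumptions.

```python
def simulator_step(user_state, system_acts):
    """One simulator turn: absorb system Inform acts, then request the next empty slot."""
    for intent, domain, slot, value in system_acts:
        if intent == "Inform" and user_state.get((domain, slot)) == "":
            user_state[(domain, slot)] = value
    for (domain, slot), value in user_state.items():
        if value == "":
            return [("Request", domain, slot, "none")]
    return [("General", "thank", "none", "none")]   # goal completed

state = {("Attraction", "fee"): "", ("Attraction", "address"): ""}
print(simulator_step(state, [("Inform", "Attraction", "fee", "free")]))
# -> [('Request', 'Attraction', 'address', 'none')]
```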
Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborate simulators as we provide the annotation of user-side dialogue states and dialogue acts.
Benchmark and Analysis ::: Evaluation with User Simulation
In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:
Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.
Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.
Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.
When all the slots in a user goal are filled by real values, the simulator terminates. This is regarded as "task finish". It is worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. We calculated the "task finish rate" over 1,000 simulations for each goal type (see Table TABREF31). Findings are summarized below:
Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.
The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.
TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural language level simulation. This may be attributed to the fact that BERTNLU prefers templates retrieved from the training set.
Conclusion
In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system.
Acknowledgments
This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback. | The workers were also asked to annotate both user states and system states, we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories |
d6e2b276390bdc957dfa7e878de80cee1f41fbca | d6e2b276390bdc957dfa7e878de80cee1f41fbca_0 | Q: What models other than standalone BERT is the new model compared to?
Text: Introduction
As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.
With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.
Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can be significantly improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:
For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.
It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.
Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.
To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.
For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.
Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.
In summary, our contributions are as follows:
We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.
We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.
We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline.
Related Work
Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.
Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.
There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.
Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms.
Model ::: Form-Context Model
We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate
with parameters $w \in \mathbb {R}^{2d}, b \in \mathbb {R}$ and $\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.
The final representation of $w$ is then simply a weighted sum of form and context embeddings:
where $\alpha = g(v_{(w,C)}^\text{form}, v_{(w,C)}^\text{context})$ and $A$ is a $d\times d$ matrix that is learned during training.
While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way.
Model ::: Bertram
To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.
Given a word $w$ and a context $C = w_1, \ldots , w_n$ in which it occurs, let $\mathbf {t} = t_1, \ldots , t_{m}$ with $m \ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define
where $\mathbf {e} = e_{t_1}, \ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:
Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:
As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.
Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:
We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.
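The way the three variants assemble BERT's input embedding sequence can be summarized by the following sketch; tensor shapes, variable names, and the toy embeddings are illustrative and do not correspond to the released implementation.

```python
import torch

def build_input(e_tokens, mask_idx, v_form, e_colon, variant):
    """Assemble the sequence of uncontextualized embeddings fed to BERT."""
    if variant == "shallow":      # keep the plain [MASK] embedding
        return e_tokens
    if variant == "replace":      # overwrite the [MASK] embedding with the form embedding
        e = e_tokens.clone()
        e[mask_idx] = v_form
        return e
    if variant == "add":          # prepend "<form embedding> :" to the input
        return torch.cat([v_form.unsqueeze(0), e_colon.unsqueeze(0), e_tokens], dim=0)
    raise ValueError(variant)

e_tokens = torch.randn(10, 768)   # toy masked context of 10 uncontextualized embeddings
v_form, e_colon = torch.randn(768), torch.randn(768)
print(build_input(e_tokens, 4, v_form, e_colon, "add").shape)   # torch.Size([12, 768])
```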
As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \cdot v^\text{context}_{(w,C)}$.
However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define
with $A^{\prime } \in \mathbb {R}^{d \times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as
where $\alpha = g(h^\text{form}_{(w,C)}, h^\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).
As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\mathcal {C} = \lbrace C_1, \ldots , C_m\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19.
Model ::: Training
Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\mathcal {C}$, Bertram is trained to minimize $\Vert e_w - v_{(w, \mathcal {C})}\Vert ^2$.
As training Bertram end-to-end requires much computation (processing a single training instance $(w,\mathcal {C})$ is as costly as processing an entire batch of $|\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:
We train only the form part, i.e. our loss for a single example $(w, \mathcal {C})$ is $\Vert e_w - v^\text{form}_{(w, \mathcal {C})} \Vert ^2$.
We train only the context part, minimizing $\Vert e_w - A \cdot v^\text{context}_{(w, \mathcal {C})} \Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.
We combine the pretrained form-only and context-only model and train all additional parameters.
Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed.
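The mimicking objective underlying all three stages reduces to a squared-distance loss, as in the toy sketch below, where a linear layer stands in for the full Bertram architecture.

```python
import torch
import torch.nn as nn

toy_model = nn.Linear(768, 768)      # placeholder standing in for the full Bertram model
e_w = torch.randn(768)               # gold embedding of a frequent word w
v = toy_model(torch.randn(768))      # stand-in for the inferred embedding v_(w, C)
loss = ((e_w - v) ** 2).sum()        # mimicking loss ||e_w - v_(w, C)||^2
loss.backward()
```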
Generation of Rare Word Datasets
To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:
A model that has no understanding of rare words at all should perform close to 0%.
A model that perfectly understands rare words should be able to classify every instance correctly.
Unfortunately, this requirement is not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and when they do, they are often of negligible importance.
To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\mathcal {D}$ containing labelled instances $(\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set.
Generation of Rare Word Datasets ::: Dataset Splitting
We partition $\mathcal {D}$ into a train set $\mathcal {D}_\text{train}$ and a set of test candidates, $\mathcal {D}_\text{cand}$, with the latter containing all instances $(\mathbf {x},y) \in \mathcal {D}$ such that for at least one word $w$ in $\mathbf {x}$, $S(w) \ne \emptyset $. Additionally, we require that the training set consists of at least one third of the entire data.
Generation of Rare Word Datasets ::: Baseline Training
We finetune $M$ on $\mathcal {D}_\text{train}$. Let $(\mathbf {x}, y) \in \mathcal {D}_\text{train}$ where $\mathbf {x} = w_1, \ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:
We randomly replace 5% of all words in $\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.
As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.
We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance.
Generation of Rare Word Datasets ::: Test Set Generation
Let $p(y \mid \mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\mathbf {x}$, and let
be the model's prediction for input $\mathbf {x}$ where $\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\mathbf {x}, y) \in \mathcal {D}_\text{cand}$ with $M(\mathbf {x}) = y$. For each such entry, let $\mathbf {x} = w_1, \ldots , w_n$ and let $\mathbf {x}_{w_i = t}$ be the sequence obtained from $\mathbf {x}$ by replacing $w_i$ with $t$. We compute
i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\mathbf {x}_{w_i = \texttt {[MASK]}}) \ne y$ –, we select a random rare synonym $\hat{w}_i \in S(w_i)$ and add $(\mathbf {x}_{w_i = \hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ have the following properties:
If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.
If the model's internal representation of each $\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.
It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1.
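The selection loop described above can be summarized by the following sketch; the `prob`, `predict`, and `synonyms` arguments are placeholders for the finetuned baseline model and the substitution dictionary, and the random choice of synonym is simplified to taking the first one.

```python
def mask(words, idxs):
    return [("[MASK]" if i in idxs else w) for i, w in enumerate(words)]

def make_test_instance(words, label, prob, predict, synonyms, max_words=5):
    """Greedily mask important words; once the prediction flips, swap them for rare synonyms."""
    chosen = []
    for _ in range(max_words):
        candidates = [i for i, w in enumerate(words)
                      if i not in chosen and synonyms.get(w)]
        if not candidates:
            return None
        # the word whose masking pushes the model furthest from the gold label
        i = min(candidates, key=lambda j: prob(mask(words, chosen + [j]), label))
        chosen.append(i)
        if predict(mask(words, chosen)) != label:
            out = list(words)
            for j in chosen:
                out[j] = synonyms[words[j]][0]   # rare synonym (random choice simplified)
            return out, label
    return None                                  # prediction never changed; discard the entry
```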
Evaluation ::: Setup
For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.
While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference.
Evaluation ::: WNLaMPro
We evaluate Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like
and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
Evaluation ::: Downstream Task Datasets
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\mathbf {e} = e_1, \ldots , e_n$ where $e_{i}, \ldots , e_{i+j}$ with $1 \le i \le i+j \le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\mathbf {e}$ with
By default, the set of contexts $\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.
In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:
As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.
To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ where each substituted word $\hat{w}_{i_j}$ occurs less than $c_\text{max}$ times in WWC and BooksCorpus, for various values of $c_\text{max}$. As one would expect, $c_\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered.
Conclusion
We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.
As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts. | Only Bert base and Bert large are compared to proposed approach. |
32537fdf0d4f76f641086944b413b2f756097e5e | 32537fdf0d4f76f641086944b413b2f756097e5e_0 | Q: How much is representation improved for rare/medium frequency words compared to standalone BERT and previous work?
Text: Introduction
As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.
With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.
Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can be significantly improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:
For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.
It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.
Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.
To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.
For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.
Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.
In summary, our contributions are as follows:
We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.
We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.
We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline.
Related Work
Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.
Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.
There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.
Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms.
Model ::: Form-Context Model
We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate
with parameters $w \in \mathbb {R}^{2d}, b \in \mathbb {R}$ and $\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.
The final representation of $w$ is then simply a weighted sum of form and context embeddings:
where $\alpha = g(v_{(w,C)}^\text{form}, v_{(w,C)}^\text{context})$ and $A$ is a $d\times d$ matrix that is learned during training.
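To make the combination step concrete, the following is a minimal PyTorch sketch of the FCM gate and weighted sum. It is not the implementation of BIBREF9; in particular, weighting the transformed context embedding by $\alpha$ and the form embedding by $1-\alpha$ is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class FormContextGate(nn.Module):
    """Minimal sketch of the FCM combination step."""

    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(2 * d, 1)        # parameters w in R^{2d}, b in R
        self.A = nn.Linear(d, d, bias=False)   # learned d x d matrix A

    def forward(self, v_form: torch.Tensor, v_context: torch.Tensor) -> torch.Tensor:
        # alpha = sigmoid(w^T [v_form; v_context] + b)
        alpha = torch.sigmoid(self.gate(torch.cat([v_form, v_context], dim=-1)))
        # assumed weighting: alpha on A * v_context, (1 - alpha) on v_form
        return alpha * self.A(v_context) + (1 - alpha) * v_form
```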
While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way.
Model ::: Bertram
To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.
Given a word $w$ and a context $C = w_1, \ldots , w_n$ in which it occurs, let $\mathbf {t} = t_1, \ldots , t_{m}$ with $m \ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define
where $\mathbf {e} = e_{t_1}, \ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:
Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:
As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.
Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:
We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.
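A sketch of how the three input variants (shallow, replace, add) can be realized with the Transformers library is shown below. It is illustrative only: the helper name `build_inputs`, the placement of the prefix right after [CLS], and the single-occurrence string replacement of $w$ are assumptions rather than details taken from the text.

```python
import torch
from transformers import BertModel, BertTokenizer

def build_inputs(context: str, word: str, v_form: torch.Tensor, variant: str,
                 tokenizer: BertTokenizer, bert: BertModel):
    """Return (inputs_embeds, attention_mask, mask_position) for one context."""
    masked_text = context.replace(word, tokenizer.mask_token, 1)
    enc = tokenizer(masked_text, return_tensors="pt")
    ids, attn = enc["input_ids"], enc["attention_mask"]
    embeds = bert.embeddings.word_embeddings(ids)          # uncontextualized e_t
    mask_pos = (ids[0] == tokenizer.mask_token_id).nonzero()[0].item()

    if variant == "replace":
        # overwrite e_[MASK] with the word's surface-form embedding
        embeds[0, mask_pos] = v_form
    elif variant == "add":
        # prepend "<form> :" right after [CLS]
        colon_id = tokenizer.convert_tokens_to_ids(":")
        colon = bert.embeddings.word_embeddings(torch.tensor([[colon_id]]))
        embeds = torch.cat(
            [embeds[:, :1], v_form.view(1, 1, -1), colon, embeds[:, 1:]], dim=1)
        attn = torch.cat([attn[:, :1], torch.ones_like(attn[:, :2]), attn[:, 1:]], dim=1)
        mask_pos += 2
    # variant == "shallow": leave the embeddings unchanged
    return embeds, attn, mask_pos
```

For all three variants, the contextualized representation of the [MASK] position at the last layer, e.g. `bert(inputs_embeds=embeds, attention_mask=attn).last_hidden_state[0, mask_pos]`, then serves as the context embedding.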
As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \cdot v^\text{context}_{(w,C)}$.
However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define
with $A^{\prime } \in \mathbb {R}^{d \times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as
where $\alpha = g(h^\text{form}_{(w,C)}, h^\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).
As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\mathcal {C} = \lbrace C_1, \ldots , C_m\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19.
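The Attentive Mimicking head can be sketched as a simple self-attention pooling over the per-context embeddings; the exact parametrization used by BIBREF19 differs in its details, so the query/key projections below are an illustrative assumption.

```python
import torch
import torch.nn as nn

class AttentiveMimickingHead(nn.Module):
    """Sketch: pool one embedding per context into a single word embedding."""

    def __init__(self, d: int):
        super().__init__()
        self.query = nn.Linear(d, d, bias=False)
        self.key = nn.Linear(d, d, bias=False)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (m, d), one embedding v_(w, C_i) per context C_i
        scores = self.query(v) @ self.key(v).T / v.shape[-1] ** 0.5   # (m, m)
        weights = torch.softmax(scores, dim=-1).mean(dim=0)           # (m,)
        return weights @ v                                            # (d,)
```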
Model ::: Training
Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\mathcal {C}$, Bertram is trained to minimize $\Vert e_w - v_{(w, \mathcal {C})}\Vert ^2$.
As training Bertram end-to-end requires much computation (processing a single training instance $(w,\mathcal {C})$ is as costly as processing an entire batch of $|\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:
We train only the form part, i.e. our loss for a single example $(w, \mathcal {C})$ is $\Vert e_w - v^\text{form}_{(w, \mathcal {C})} \Vert ^2$.
We train only the context part, minimizing $\Vert e_w - A \cdot v^\text{context}_{(w, \mathcal {C})} \Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.
We combine the pretrained form-only and context-only model and train all additional parameters.
Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed.
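A minimal sketch of the mimicking objective and the three-stage schedule is given below; the submodule names (`form_part`, `context_part`, `bert`) are hypothetical and only indicate which parameter groups are frozen at each stage.

```python
import torch

def mimicking_loss(e_w: torch.Tensor, v_pred: torch.Tensor) -> torch.Tensor:
    # || e_w - v_(w, C) ||^2
    return torch.sum((e_w - v_pred) ** 2)

def configure_stage(bertram, stage: int):
    """Freeze parameter groups according to the three-stage training process."""
    for p in bertram.parameters():
        p.requires_grad = True
    if stage == 1:                                   # form part only
        for p in bertram.context_part.parameters():
            p.requires_grad = False
        for p in bertram.bert.parameters():
            p.requires_grad = False
    elif stage == 2:                                 # context part only, BERT frozen
        for p in bertram.form_part.parameters():
            p.requires_grad = False
        for p in bertram.bert.parameters():
            p.requires_grad = False
    # stage == 3: train the remaining additional parameters
    # (the evaluation setup below notes that BERT itself stays frozen here as well)
```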
Generation of Rare Word Datasets
To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:
A model that has no understanding of rare words at all should perform close to 0%.
A model that perfectly understands rare words should be able to classify every instance correctly.
Unfortunately, these requirements are not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and when they do, they are often of negligible importance.
To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\mathcal {D}$ containing labelled instances $(\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set.
Generation of Rare Word Datasets ::: Dataset Splitting
We partition $\mathcal {D}$ into a train set $\mathcal {D}_\text{train}$ and a set of test candidates, $\mathcal {D}_\text{cand}$, with the latter containing all instances $(\mathbf {x},y) \in \mathcal {D}$ such that for at least one word $w$ in $\mathbf {x}$, $S(w) \ne \emptyset $. Additionally, we require that the training set consists of at least one third of the entire data.
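A sketch of this split is shown below; how surplus test candidates are moved back into the training set to satisfy the one-third constraint is an assumption, since the text does not specify it.

```python
def split_dataset(dataset, S, min_train_fraction=1 / 3):
    """dataset: list of (x, y) with x a list of words; S: dict word -> set of rare synonyms."""
    cand = [(x, y) for x, y in dataset if any(S.get(w) for w in x)]
    train = [(x, y) for x, y in dataset if not any(S.get(w) for w in x)]
    while len(train) < min_train_fraction * len(dataset) and cand:
        train.append(cand.pop())   # assumption: surplus candidates go back to train
    return train, cand
```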
Generation of Rare Word Datasets ::: Baseline Training
We finetune $M$ on $\mathcal {D}_\text{train}$. Let $(\mathbf {x}, y) \in \mathcal {D}_\text{train}$ where $\mathbf {x} = w_1, \ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects (a code sketch of these modifications follows the list):
We randomly replace 5% of all words in $\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.
As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.
We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance.
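The first two modifications can be sketched at the word level as follows; applying the duplication check before the masking check, and leaving WordPiece tokenization and label handling aside, are simplifications made for illustration.

```python
import random

MASK = "[MASK]"

def augment(words):
    """Word-level sketch of the modified finetuning inputs."""
    out = []
    for w in words:
        if random.random() < 0.10:
            # duplicate as "w / w", masking each repetition with probability 25%
            first = MASK if random.random() < 0.25 else w
            second = MASK if random.random() < 0.25 else w
            out.extend([first, "/", second])
        elif random.random() < 0.05:
            out.append(MASK)       # random masking of 5% of all words
        else:
            out.append(w)
    return out

# Third modification: the embedding layer is excluded from finetuning, e.g.
#   for p in model.bert.embeddings.parameters():
#       p.requires_grad = False
```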
Generation of Rare Word Datasets ::: Test Set Generation
Let $p(y \mid \mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\mathbf {x}$, and let
be the model's prediction for input $\mathbf {x}$ where $\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\mathbf {x}, y) \in \mathcal {D}_\text{cand}$ with $M(\mathbf {x}) = y$. For each such entry, let $\mathbf {x} = w_1, \ldots , w_n$ and let $\mathbf {x}_{w_i = t}$ be the sequence obtained from $\mathbf {x}$ by replacing $w_i$ with $t$. We compute
i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\mathbf {x}_{w_i = \texttt {[MASK]}}) \ne y$ –, we select a random rare synonym $\hat{w}_i \in S(w_i)$ and add $(\mathbf {x}_{w_i = \hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ have the following properties:
If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.
If the model's internal representation of each $\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.
It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1.
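The greedy selection and substitution loop described above can be sketched as follows; `model.prob` and `model.predict` are hypothetical wrappers around the finetuned baseline $M$, and restricting candidates to words with non-empty $S(w)$ is an assumption made so that every selected word can actually be replaced by a rare synonym.

```python
import random

MASK = "[MASK]"

def masked(x, idxs):
    return [MASK if i in idxs else w for i, w in enumerate(x)]

def make_test_entry(model, x, y, S, max_words=5):
    """Return a substituted entry (x', y) or None if the candidate is discarded."""
    chosen = []
    while len(chosen) < max_words:
        candidates = [i for i, w in enumerate(x) if S.get(w) and i not in chosen]
        if not candidates:
            return None
        # word whose masking (on top of earlier masks) pushes p(y | x) down the most
        i = min(candidates, key=lambda i: model.prob(masked(x, chosen + [i]), y))
        chosen.append(i)
        if model.predict(masked(x, chosen)) != y:
            return [random.choice(sorted(S[w])) if j in chosen else w
                    for j, w in enumerate(x)], y
    return None  # label never changed after masking max_words words; discard
```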
Evaluation ::: Setup
For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.
While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference.
Evaluation ::: WNLaMPro
We evaluate Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like
and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
Evaluation ::: Downstream Task Datasets
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\mathbf {e} = e_1, \ldots , e_n$ where $e_{i}, \ldots , e_{i+j}$ with $1 \le i \le i+j \le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\mathbf {e}$ with
By default, the set of contexts $\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.
In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:
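In terms of the embedding sequence, both injection variants amount to splicing; the sketch below is illustrative, and the ordering of the original WordPiece embeddings, the slash, and the Bertram embedding in the second variant is an assumption.

```python
import torch

def inject_bertram(embeds, start, length, v_bertram, slash_embed=None):
    """embeds: (n, d) uncontextualized token embeddings; the rare word occupies
    positions start, ..., start + length - 1."""
    before = embeds[:start]
    word = embeds[start:start + length]
    after = embeds[start + length:]
    if slash_embed is None:
        # default: replace the WordPiece sequence with the Bertram embedding
        return torch.cat([before, v_bertram.unsqueeze(0), after])
    # alternative: keep both representations, separated by a single slash
    return torch.cat([before, word, slash_embed.unsqueeze(0),
                      v_bertram.unsqueeze(0), after])
```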
As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.
To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ where each substituted word $\hat{w}_{i_j}$ occurs less than $c_\text{max}$ times in WWC and BooksCorpus, for various values of $c_\text{max}$. As one would expect, $c_\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered.
Conclusion
We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.
As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts. | improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking |
ef081d78be17ef2af792e7e919d15a235b8d7275 | ef081d78be17ef2af792e7e919d15a235b8d7275_0 | Q: What are three downstream task datasets?
Text: Introduction
As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.
With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.
Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can significantly be improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:
For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.
It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.
Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.
To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.
For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.
Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.
In summary, our contributions are as follows:
We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.
We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.
We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline.
Related Work
Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.
Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.
There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.
Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms.
Model ::: Form-Context Model
We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate
with parameters $w \in \mathbb {R}^{2d}, b \in \mathbb {R}$ and $\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.
The final representation of $w$ is then simply a weighted sum of form and context embeddings:
where $\alpha = g(v_{(w,C)}^\text{form}, v_{(w,C)}^\text{context})$ and $A$ is a $d\times d$ matrix that is learned during training.
While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way.
Model ::: Bertram
To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.
Given a word $w$ and a context $C = w_1, \ldots , w_n$ in which it occurs, let $\mathbf {t} = t_1, \ldots , t_{m}$ with $m \ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define
where $\mathbf {e} = e_{t_1}, \ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:
Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:
As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.
Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:
We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.
As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \cdot v^\text{context}_{(w,C)}$.
However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define
with $A^{\prime } \in \mathbb {R}^{d \times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as
where $\alpha = g(h^\text{form}_{(w,C)}, h^\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).
As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\mathcal {C} = \lbrace C_1, \ldots , C_m\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19.
Model ::: Training
Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\mathcal {C}$, Bertram is trained to minimize $\Vert e_w - v_{(w, \mathcal {C})}\Vert ^2$.
As training Bertram end-to-end requires much computation (processing a single training instance $(w,\mathcal {C})$ is as costly as processing an entire batch of $|\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:
We train only the form part, i.e. our loss for a single example $(w, \mathcal {C})$ is $\Vert e_w - v^\text{form}_{(w, \mathcal {C})} \Vert ^2$.
We train only the context part, minimizing $\Vert e_w - A \cdot v^\text{context}_{(w, \mathcal {C})} \Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.
We combine the pretrained form-only and context-only model and train all additional parameters.
Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed.
Generation of Rare Word Datasets
To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:
A model that has no understanding of rare words at all should perform close to 0%.
A model that perfectly understands rare words should be able to classify every instance correctly.
Unfortunately, these requirements are not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and when they do, they are often of negligible importance.
To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\mathcal {D}$ containing labelled instances $(\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set.
Generation of Rare Word Datasets ::: Dataset Splitting
We partition $\mathcal {D}$ into a train set $\mathcal {D}_\text{train}$ and a set of test candidates, $\mathcal {D}_\text{cand}$, with the latter containing all instances $(\mathbf {x},y) \in \mathcal {D}$ such that for at least one word $w$ in $\mathbf {x}$, $S(w) \ne \emptyset $. Additionally, we require that the training set consists of at least one third of the entire data.
Generation of Rare Word Datasets ::: Baseline Training
We finetune $M$ on $\mathcal {D}_\text{train}$. Let $(\mathbf {x}, y) \in \mathcal {D}_\text{train}$ where $\mathbf {x} = w_1, \ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:
We randomly replace 5% of all words in $\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.
As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.
We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance.
Generation of Rare Word Datasets ::: Test Set Generation
Let $p(y \mid \mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\mathbf {x}$, and let
be the model's prediction for input $\mathbf {x}$ where $\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\mathbf {x}, y) \in \mathcal {D}_\text{cand}$ with $M(\mathbf {x}) = y$. For each such entry, let $\mathbf {x} = w_1, \ldots , w_n$ and let $\mathbf {x}_{w_i = t}$ be the sequence obtained from $\mathbf {x}$ by replacing $w_i$ with $t$. We compute
i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\mathbf {x}_{w_i = \texttt {[MASK]}}) \ne y$ –, we select a random rare synonym $\hat{w}_i \in S(w_i)$ and add $(\mathbf {x}_{w_i = \hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ have the following properties:
If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.
If the model's internal representation of each $\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.
It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1.
Evaluation ::: Setup
For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.
While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference.
Evaluation ::: WNLaMPro
We evaluate Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like
and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
Evaluation ::: Downstream Task Datasets
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\mathbf {e} = e_1, \ldots , e_n$ where $e_{i}, \ldots , e_{i+j}$ with $1 \le i \le i+j \le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\mathbf {e}$ with
By default, the set of contexts $\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.
In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:
As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.
To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ where each substituted word $\hat{w}_{i_j}$ occurs less than $c_\text{max}$ times in WWC and BooksCorpus, for various values of $c_\text{max}$. As one would expect, $c_\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered.
Conclusion
We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.
As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts. | MNLI BIBREF21, AG's News BIBREF22, DBPedia BIBREF23 |
ef081d78be17ef2af792e7e919d15a235b8d7275 | ef081d78be17ef2af792e7e919d15a235b8d7275_1 | Q: What are three downstream task datasets?
Text: Introduction
As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.
With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.
Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can significantly be improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:
For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.
It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.
Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.
To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.
For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.
Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.
In summary, our contributions are as follows:
We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.
We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.
We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline.
Related Work
Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.
Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.
There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.
Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms.
Model ::: Form-Context Model
We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate
$g(x, y) = \sigma (w^\top [x ; y] + b)$, with parameters $w \in \mathbb {R}^{2d}, b \in \mathbb {R}$, $[x;y]$ denoting the concatenation of $x$ and $y$, and $\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.
The final representation of $w$ is then simply a weighted sum of form and context embeddings:
$v_{(w,C)} = \alpha \cdot v_{(w,C)}^\text{form} + (1-\alpha ) \cdot A \cdot v_{(w,C)}^\text{context}$, where $\alpha = g(v_{(w,C)}^\text{form}, v_{(w,C)}^\text{context})$ and $A$ is a $d\times d$ matrix that is learned during training.
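For concreteness, a minimal PyTorch-style sketch of this computation is given below; the class name, the way n-gram and context-word ids are supplied, and the omission of batching and padding details are illustrative assumptions rather than the reference implementation.

import torch
import torch.nn as nn

class FormContextModel(nn.Module):
    """Minimal FCM sketch: gate-weighted mix of a form embedding and a context embedding."""
    def __init__(self, num_ngrams: int, vocab_size: int, d: int):
        super().__init__()
        self.ngram_emb = nn.Embedding(num_ngrams, d)   # n-gram embeddings, learned during training
        self.word_emb = nn.Embedding(vocab_size, d)    # given high-quality word embeddings
        self.gate = nn.Linear(2 * d, 1)                # parameters w and b of the gate
        self.A = nn.Linear(d, d, bias=False)           # the d x d matrix A

    def forward(self, ngram_ids, context_ids):
        v_form = self.ngram_emb(ngram_ids).mean(dim=1)       # average over the word's n-grams
        v_context = self.word_emb(context_ids).mean(dim=1)   # average over the context words
        alpha = torch.sigmoid(self.gate(torch.cat([v_form, v_context], dim=-1)))
        return alpha * v_form + (1 - alpha) * self.A(v_context)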
While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way.
Model ::: Bertram
To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.
Given a word $w$ and a context $C = w_1, \ldots , w_n$ in which it occurs, let $\mathbf {t} = t_1, \ldots , t_{m}$ with $m \ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define
where $\mathbf {e} = e_{t_1}, \ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:
Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:
As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.
Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:
We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.
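To illustrate, the sketch below shows, using the Hugging Face transformers API, (i) how the shallow variant could read off a context embedding from the contextualized [MASK] representation and (ii) how the add variant could prepad the embedded input with a surface-form vector followed by a colon; the model name, the function names and the assumption that the form embedding already has BERT's embedding size are ours, not the authors'.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def shallow_context_embedding(context: str, rare_word: str) -> torch.Tensor:
    # (i) replace the rare word with [MASK], (ii) tokenize to match the BERT vocabulary
    masked = context.replace(rare_word, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    i = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state        # h^{l_max} for every position
    return hidden[0, i]                                  # contextualized [MASK] representation

def add_variant_hidden_states(form_embedding: torch.Tensor, masked_token_ids: torch.Tensor):
    # prepad the uncontextualized embeddings with the surface-form vector and a colon
    emb = bert.get_input_embeddings()
    colon = emb(torch.tensor([tokenizer.convert_tokens_to_ids(":")]))
    tokens = emb(masked_token_ids)                       # e_{t_1}, ..., e_{t_m}
    seq = torch.cat([form_embedding.unsqueeze(0), colon, tokens], dim=0)
    return bert(inputs_embeds=seq.unsqueeze(0)).last_hidden_state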
As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \cdot v^\text{context}_{(w,C)}$.
However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define
with $A^{\prime } \in \mathbb {R}^{d \times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as
where $\alpha = g(h^\text{form}_{(w,C)}, h^\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).
As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\mathcal {C} = \lbrace C_1, \ldots , C_m\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19.
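The following is a simplified sketch of such an attentive head over per-context embeddings; it is not the exact Attentive Mimicking mechanism of BIBREF19, and the query/key parameterization is an assumption made purely for illustration.

import torch
import torch.nn as nn

class AttentiveCombination(nn.Module):
    """Simplified attentive head: weight per-context embeddings and combine them linearly."""
    def __init__(self, d: int):
        super().__init__()
        self.query = nn.Linear(d, d)
        self.key = nn.Linear(d, d)

    def forward(self, context_embeddings: torch.Tensor) -> torch.Tensor:
        # context_embeddings: (m, d), one vector v_(w, C_j) per context C_j
        q, k = self.query(context_embeddings), self.key(context_embeddings)
        scores = (q @ k.t()).mean(dim=1) / context_embeddings.size(1) ** 0.5
        weights = torch.softmax(scores, dim=0)        # informative contexts receive higher weight
        return (weights.unsqueeze(1) * context_embeddings).sum(dim=0)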
Model ::: Training
Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\mathcal {C}$, Bertram is trained to minimize $\Vert e_w - v_{(w, \mathcal {C})}\Vert ^2$.
As training Bertram end-to-end requires much computation (processing a single training instance $(w,\mathcal {C})$ is as costly as processing an entire batch of $|\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:
We train only the form part, i.e. our loss for a single example $(w, \mathcal {C})$ is $\Vert e_w - v^\text{form}_{(w, \mathcal {C})} \Vert ^2$.
We train only the context part, minimizing $\Vert e_w - A \cdot v^\text{context}_{(w, \mathcal {C})} \Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.
We combine the pretrained form-only and context-only model and train all additional parameters.
Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed.
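A rough sketch of the mimicking objective and the stage-wise freezing is shown below; the parameter-name prefixes used to select the form part, the context part and the BERT weights are hypothetical and depend on how the modules are actually named.

import torch

def mimicking_loss(v_pred: torch.Tensor, e_gold: torch.Tensor) -> torch.Tensor:
    # || e_w - v_(w, C) ||^2, averaged over the batch
    return ((e_gold - v_pred) ** 2).sum(dim=-1).mean()

def set_stage(model: torch.nn.Module, stage: int) -> None:
    # Stage 1: form part only; stage 2: context part with BERT frozen; stage 3: everything but BERT.
    for name, p in model.named_parameters():
        if stage == 1:
            p.requires_grad = name.startswith("form")
        elif stage == 2:
            p.requires_grad = name.startswith("context") and "bert" not in name
        else:
            p.requires_grad = "bert" not in name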
Generation of Rare Word Datasets
To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:
A model that has no understanding of rare words at all should perform close to 0%.
A model that perfectly understands rare words should be able to classify every instance correctly.
Unfortunately, these requirements are not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and, when they do, they are often of negligible importance.
To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\mathcal {D}$ containing labelled instances $(\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set.
Generation of Rare Word Datasets ::: Dataset Splitting
We partition $\mathcal {D}$ into a train set $\mathcal {D}_\text{train}$ and a set of test candidates, $\mathcal {D}_\text{cand}$, with the latter containing all instances $(\mathbf {x},y) \in \mathcal {D}$ such that for at least one word $w$ in $\mathbf {x}$, $S(w) \ne \emptyset $. Additionally, we require that the training set consists of at least one third of the entire data.
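A possible sketch of this splitting step, assuming instances are (word-list, label) pairs and S maps words to sets of rare synonyms:

def split_dataset(data, S):
    """Split labelled instances into a train set and a set of test candidates."""
    cand = [(x, y) for x, y in data if any(S.get(w) for w in x)]
    train = [(x, y) for x, y in data if not any(S.get(w) for w in x)]
    while len(train) < len(data) // 3 and cand:   # keep at least one third of the data for training
        train.append(cand.pop())
    return train, cand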
Generation of Rare Word Datasets ::: Baseline Training
We finetune $M$ on $\mathcal {D}_\text{train}$. Let $(\mathbf {x}, y) \in \mathcal {D}_\text{train}$ where $\mathbf {x} = w_1, \ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:
We randomly replace 5% of all words in $\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.
As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.
We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance.
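The first two modifications amount to a simple input perturbation, sketched here (the third only affects which parameters are optimized); drawing a single random number per word and using the 0.05/0.15 thresholds is a simplification of the described procedure.

import random

def perturb(words, mask_token="[MASK]"):
    """Apply the masking and duplication modifications to one tokenized instance."""
    out = []
    for w in words:
        r = random.random()
        if r < 0.05:                              # mask 5% of all words
            out.append(mask_token)
        elif r < 0.15:                            # duplicate ~10% of words as "w / w"
            first = mask_token if random.random() < 0.25 else w
            second = mask_token if random.random() < 0.25 else w
            out.extend([first, "/", second])
        else:
            out.append(w)
    return out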
Generation of Rare Word Datasets ::: Test Set Generation
Let $p(y \mid \mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\mathbf {x}$, and let
$M(\mathbf {x}) = \arg \max _{y \in \mathcal {Y}} p(y \mid \mathbf {x})$ be the model's prediction for input $\mathbf {x}$, where $\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\mathbf {x}, y) \in \mathcal {D}_\text{cand}$ with $M(\mathbf {x}) = y$. For each such entry, let $\mathbf {x} = w_1, \ldots , w_n$ and let $\mathbf {x}_{w_i = t}$ be the sequence obtained from $\mathbf {x}$ by replacing $w_i$ with $t$. We compute
i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\mathbf {x}_{w_i = \texttt {[MASK]}}) \ne y$ –, we select a random rare synonym $\hat{w}_i \in S(w_i)$ and add $(\mathbf {x}_{w_i = \hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ have the following properties:
If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.
If the model's internal representation of each $\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.
It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1.
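A greedy sketch of this test set construction is given below; the predict and predict_proba callables, the restriction to words that have rare synonyms, and the data layout are assumptions made for illustration only.

import random

def make_test_entry(x, y, predict, predict_proba, S, mask="[MASK]", max_words=5):
    """Greedily mask label-critical words; emit the entry once the baseline prediction flips."""
    words = list(x)
    chosen = {}                                    # position -> sampled rare synonym
    for _ in range(max_words):
        candidates = [i for i in range(len(x)) if i not in chosen and S.get(x[i])]
        if not candidates:
            return None
        # word whose masking pushes p(y | x) down the most
        i = min(candidates, key=lambda j: predict_proba(words[:j] + [mask] + words[j + 1:], y))
        chosen[i] = random.choice(sorted(S[x[i]]))
        words[i] = mask
        if predict(words) != y:                    # masking changed the model's prediction
            return [chosen.get(j, w) for j, w in enumerate(x)], y
    return None                                    # label never changed: discard this candidate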
Evaluation ::: Setup
For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.
While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference.
Evaluation ::: WNLaMPro
We evaluate Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like
and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
Evaluation ::: Downstream Task Datasets
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\mathbf {e} = e_1, \ldots , e_n$ where $e_{i}, \ldots , e_{i+j}$ with $1 \le i \le i+j \le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\mathbf {e}$ with
By default, the set of contexts $\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.
In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:
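Both injection strategies can be sketched as a single embedding-level operation; the span convention and the optional slash-embedding argument are assumptions, not the authors' code.

import torch

def inject_bertram(token_embeddings, span, bertram_vec, slash_embedding=None):
    """Replace (or extend) a rare word's WordPiece embeddings with its Bertram embedding."""
    i, j = span                                    # e_i, ..., e_{i+j} belong to the rare word
    before, after = token_embeddings[:i], token_embeddings[i + j + 1:]
    if slash_embedding is None:                    # default: replace the WordPiece span
        middle = bertram_vec.unsqueeze(0)
    else:                                          # variant: keep the span and append "/ BERTRAM(w)"
        middle = torch.cat([token_embeddings[i:i + j + 1],
                            slash_embedding.unsqueeze(0),
                            bertram_vec.unsqueeze(0)], dim=0)
    return torch.cat([before, middle, after], dim=0)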
As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.
To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ where each substituted word $\hat{w}_{i_j}$ occurs less than $c_\text{max}$ times in WWC and BooksCorpus, for various values of $c_\text{max}$. As one would expect, $c_\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered.
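The frequency-bucketed accuracy described here can be computed with a small helper along the following lines; the entry field names are assumptions.

def accuracy_below(entries, predictions, counts, c_max):
    """Accuracy restricted to entries whose substituted words all occur fewer than c_max times."""
    keep = [k for k, e in enumerate(entries)
            if all(counts.get(w, 0) < c_max for w in e["substituted"])]
    correct = sum(predictions[k] == entries[k]["label"] for k in keep)
    return correct / len(keep) if keep else float("nan")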
Conclusion
We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.
As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts. | MNLI, AG's News, DBPedia |
537b2d7799124d633892a1ef1a485b3b071b303d | 537b2d7799124d633892a1ef1a485b3b071b303d_0 | Q: What is dataset for word probing task?
Text: Introduction
As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.
With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.
Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can be significantly improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:
For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.
It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.
Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.
To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.
For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.
Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.
In summary, our contributions are as follows:
We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.
We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.
We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline.
Related Work
Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.
Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.
There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.
Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms.
Model ::: Form-Context Model
We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\text{form} \in \mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\text{context} \in \mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate
$g(x, y) = \sigma (w^\top [x ; y] + b)$, with parameters $w \in \mathbb {R}^{2d}, b \in \mathbb {R}$, $[x;y]$ denoting the concatenation of $x$ and $y$, and $\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.
The final representation of $w$ is then simply a weighted sum of form and context embeddings:
$v_{(w,C)} = \alpha \cdot v_{(w,C)}^\text{form} + (1-\alpha ) \cdot A \cdot v_{(w,C)}^\text{context}$, where $\alpha = g(v_{(w,C)}^\text{form}, v_{(w,C)}^\text{context})$ and $A$ is a $d\times d$ matrix that is learned during training.
While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way.
Model ::: Bertram
To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\mathbf {e} = e_1, \ldots , e_n$, we denote by $\textbf {h}_j^l(\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\mathbf {e}$ as input.
Given a word $w$ and a context $C = w_1, \ldots , w_n$ in which it occurs, let $\mathbf {t} = t_1, \ldots , t_{m}$ with $m \ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define
where $\mathbf {e} = e_{t_1}, \ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:
Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:
As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.
Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:
We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.
As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \cdot v^\text{context}_{(w,C)}$.
However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define
with $A^{\prime } \in \mathbb {R}^{d \times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as
where $\alpha = g(h^\text{form}_{(w,C)}, h^\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).
As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\mathcal {C} = \lbrace C_1, \ldots , C_m\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19.
Model ::: Training
Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\mathcal {C}$, Bertram is trained to minimize $\Vert e_w - v_{(w, \mathcal {C})}\Vert ^2$.
As training Bertram end-to-end requires much computation (processing a single training instance $(w,\mathcal {C})$ is as costly as processing an entire batch of $|\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:
We train only the form part, i.e. our loss for a single example $(w, \mathcal {C})$ is $\Vert e_w - v^\text{form}_{(w, \mathcal {C})} \Vert ^2$.
We train only the context part, minimizing $\Vert e_w - A \cdot v^\text{context}_{(w, \mathcal {C})} \Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.
We combine the pretrained form-only and context-only model and train all additional parameters.
Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed.
Generation of Rare Word Datasets
To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:
A model that has no understanding of rare words at all should perform close to 0%.
A model that perfectly understands rare words should be able to classify every instance correctly.
Unfortunately, these requirements are not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and, when they do, they are often of negligible importance.
To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\mathcal {D}$ containing labelled instances $(\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set.
Generation of Rare Word Datasets ::: Dataset Splitting
We partition $\mathcal {D}$ into a train set $\mathcal {D}_\text{train}$ and a set of test candidates, $\mathcal {D}_\text{cand}$, with the latter containing all instances $(\mathbf {x},y) \in \mathcal {D}$ such that for at least one word $w$ in $\mathbf {x}$, $S(w) \ne \emptyset $. Additionally, we require that the training set consists of at least one third of the entire data.
Generation of Rare Word Datasets ::: Baseline Training
We finetune $M$ on $\mathcal {D}_\text{train}$. Let $(\mathbf {x}, y) \in \mathcal {D}_\text{train}$ where $\mathbf {x} = w_1, \ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:
We randomly replace 5% of all words in $\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.
As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.
We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance.
Generation of Rare Word Datasets ::: Test Set Generation
Let $p(y \mid \mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\mathbf {x}$, and let
$M(\mathbf {x}) = \arg \max _{y \in \mathcal {Y}} p(y \mid \mathbf {x})$ be the model's prediction for input $\mathbf {x}$, where $\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\mathbf {x}, y) \in \mathcal {D}_\text{cand}$ with $M(\mathbf {x}) = y$. For each such entry, let $\mathbf {x} = w_1, \ldots , w_n$ and let $\mathbf {x}_{w_i = t}$ be the sequence obtained from $\mathbf {x}$ by replacing $w_i$ with $t$. We compute
i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\mathbf {x}_{w_i = \texttt {[MASK]}}) \ne y$ –, we select a random rare synonym $\hat{w}_i \in S(w_i)$ and add $(\mathbf {x}_{w_i = \hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ have the following properties:
If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.
If the model's internal representation of each $\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.
It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1.
Evaluation ::: Setup
For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.
While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference.
Evaluation ::: WNLaMPro
We evaluate Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like
and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
Evaluation ::: Downstream Task Datasets
To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.
Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\mathbf {e} = e_1, \ldots , e_n$ where $e_{i}, \ldots , e_{i+j}$ with $1 \le i \le i+j \le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\mathbf {e}$ with
By default, the set of contexts $\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.
In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:
As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.
To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\mathbf {x}_{w_{i_1} = \hat{w}_{i_1}, \ldots , w_{i_k} = \hat{w}_{i_k} }, y)$ where each substituted word $\hat{w}_{i_j}$ occurs less than $c_\text{max}$ times in WWC and BooksCorpus, for various values of $c_\text{max}$. As one would expect, $c_\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered.
Conclusion
We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.
As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts. | WNLaMPro dataset |
9aca4b89e18ce659c905eccc78eda76af9f0072a | 9aca4b89e18ce659c905eccc78eda76af9f0072a_0 | Q: How fast is the model compared to baselines?
Text: Introduction
Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information extraction (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.
Existing EL methods can be divided into two categories: local models and global models. Local models focus mainly on the contextual words surrounding the mentions, and mentions are disambiguated independently. These methods do not work well when the contextual information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, and mentions are disambiguated jointly. Most previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones is involved, which not only increases the model complexity but also introduces noise. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in the KB. It is difficult to disambiguate it using local models, due to the scarce common information between the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to the basketball team (linked by an orange dashed line) may make global models mistakenly link "France" to France national basketball team. So, how can these problems be solved?
We note that mentions in text usually differ in disambiguation difficulty, depending on the quality of the contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and obtain correct results for them, it is effective to utilize the information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be linked to the national football teams, because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.
Inspired by this intuition, we design our solution around three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequential manner; (ii) utilizing the information of previously referred entities for subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid error propagation when a previous decision is wrong.
In order to achieve these aims, we consider global EL as a sequential decision problem and propose a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, the Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, the Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via an LSTM network, which maintains a long-term memory of the features of entities that have been selected in previous states. The Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the current mention-entity pair representations, but also takes into account the features of the entities referred to in previous states, which are provided by the Global Encoder. In this way, the Entity Selector is able to take actions based on both the current state and previous ones. After the ambiguity of all mentions in the sequence has been eliminated, delayed rewards are used to adjust its policy in order to obtain an optimized global decision.
A deep RL model, which learns to directly optimize the overall evaluation metric, works much better than models that learn with loss functions evaluating only a particular single decision. Owing to this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue systems BIBREF4 and relation classification BIBREF5 . To the best of our knowledge, we are the first to design an RL model for global entity linking. In this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.
In summary, the main contributions of our paper include the following aspects:
Methodology
The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: the Local Encoder, which encodes local features of mentions and their candidate entities; the Global Encoder, which encodes the global coherence of mentions in a sequential manner; and the Entity Selector, which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are mutually correlated, we train them jointly. Moreover, the Local Encoder, as the basis of the entire framework, is trained independently before the joint training process starts. In the following, we introduce the technical details of these modules.
Preliminaries
Before introducing our model, we first define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace $ , each mention $ m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ , or to return "NIL" if there is no correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for the model to select from.
Inspired by the previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use their n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory and avoid unnecessary calculations, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of the candidate set. The features used in XGBoost can be divided into two aspects: one is string similarity, such as the Jaro-Winkler distance between the entity title and the mention; the other is semantic similarity, such as the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use statistical features based on the pageviews and hyperlinks in Wikipedia. Empirically, we get the pageview of each entity from the Wikipedia Tool Labs, which counts the number of visits on each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as the final candidate set for each mention.
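To make the candidate filtering concrete, the following minimal sketch scores candidates with a hand-weighted combination of the three kinds of features described above; the weights, the dictionary layout of a candidate, and the use of difflib as a stand-in for the Jaro-Winkler distance are illustrative assumptions rather than the actual trained XGBoost ranker.

    import difflib
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    def score_candidate(mention, context_vec, cand):
        # cand is assumed to be a dict with keys "title", "embedding", "pageview"
        string_sim = difflib.SequenceMatcher(None, mention.lower(), cand["title"].lower()).ratio()
        semantic_sim = cosine(context_vec, cand["embedding"])
        popularity = np.log1p(cand["pageview"])
        # hand-picked weights stand in for the learned ranker
        return string_sim + semantic_sim + 0.1 * popularity

    def filter_candidates(mention, context_vec, candidates, k=5):
        ranked = sorted(candidates, key=lambda c: score_candidate(mention, context_vec, c), reverse=True)
        return ranked[:k]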
Local Encoder
Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., e_t^k\rbrace $ , we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we first select its $n$ surrounding words and represent them as word embeddings using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace $ as a fixed-size vector $V_{m_t}$ . The description of the entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of the entity, there is much other valuable information in the knowledge base. To make full use of this information, many researchers have trained entity embeddings by combining the description, category, and relationships of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .
After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equations:
$$\Psi (m_t, e_t^i) = MLP(V_{m_t}\oplus {V_{e_t^i}})$$ (Eq. 9)
Where $\oplus $ indicates vector concatenation. With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks ground truth higher than others. The rank loss function is defined as follows:
$$L_{local} = max(0, \gamma -\Psi (m_t, e_t^+)+\Psi (m_t, e_t^-))$$ (Eq. 10)
When optimizing the objective function, we minimize the rank loss similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ . Here $\gamma > 0$ is a margin parameter, and our purpose is to make the score of the positive target entity $e_t^+$ at least a margin $\gamma $ higher than that of the negative candidate entity $e_t^-$ .
With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder.
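As a rough illustration of the local scoring and the ranking loss, the sketch below replaces the two LSTM encoders with mean-pooled word vectors and uses a one-hidden-layer MLP; the dimensions and randomly initialized weights are assumptions, and only the margin value follows the setting reported later in the experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, hidden = 300, 128
    W1 = rng.normal(scale=0.1, size=(hidden, 3 * dim))    # concat of mention, entity embedding, description
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.1, size=hidden)

    def encode(word_vectors):
        # stand-in for the LSTM encoder: mean pooling over word embeddings
        return np.mean(word_vectors, axis=0)

    def local_similarity(v_mention, v_entity_emb, v_description):
        x = np.concatenate([v_mention, v_entity_emb, v_description])
        h = np.maximum(0.0, W1 @ x + b1)                  # ReLU hidden layer
        return float(w2 @ h)                              # scalar similarity score

    def hinge_loss(score_pos, score_neg, gamma=0.1):
        # L_local = max(0, gamma - Psi(m, e+) + Psi(m, e-))
        return max(0.0, gamma - score_pos + score_neg)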
Global Encoder
In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. Therefore, we use an LSTM network, which is capable of maintaining a long-term memory, to encode the ranked mention sequence. What we need to emphasize is that our global encoder only encodes the mentions that have already been disambiguated by the entity selector, whose representation is denoted as $V_{a_t}$ .
As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we first divide the adjacent mentions into segments by the order they appear in the document, based on the observation that topical consistency attenuates along with the distance between the mentions. Then, we sort the mentions in a segment based on the local similarity and place the mention that has a higher similarity value at the front of the sequence. In Equation 9, we define the local similarity of $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\Psi _{max}(m_i, e_i^a)$ as the maximum local similarity between $m_i$ and its candidate set $C_{m_i} = \lbrace e_i^1, e_i^2,..., e_i^n\rbrace $ . We use $\Psi _{max}(m_i, e_i^a)$ as the criterion when sorting mentions. For instance, if $\Psi _{max}(m_i, e_i^a) > \Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under these circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, the previously selected target entity information is encoded by the global encoder and the encoding result serves as input to the entity selector.
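The segmentation and sorting step can be summarized by the sketch below; the segment size of 4 matches the choice reported in the experiments, while the data layout is an assumption for illustration.

    def build_sequences(mentions, max_scores, segment_size=4):
        """mentions: mention ids in document order;
        max_scores[m]: the maximum local similarity over m's candidate set."""
        sequences = []
        for start in range(0, len(mentions), segment_size):
            segment = mentions[start:start + segment_size]
            # place mentions with higher maximum local similarity at the front
            segment.sort(key=lambda m: max_scores[m], reverse=True)
            sequences.append(segment)
        return sequences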
Before using entity selector to choose target entities, we pre-trained the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model.
$$L_{global} = -\frac{1}{n}\sum _x{\left[y\ln {y^{^{\prime }}} + (1-y)\ln (1-y^{^{\prime }})\right]}$$ (Eq. 12)
Where $y\in \lbrace 0,1\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\prime }}\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections.
Entity Selector
In the entity selector module, we choose the target entity from the candidate set based on the results of the local and global encoders. In the process of sequence disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 . Under the guidance of the policy, the agent can decide which action (choosing the target entity from the candidate set) should be taken at each state, and receives a delayed reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select the target entity via a policy network.
The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows:
$$S_t = V_{m_i}^t\oplus {V_{e_i}^t}\oplus {V_{feature}^t}\oplus {V_{e^*}^{t-1}}$$ (Eq. 15)
Where $\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities corresponding to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we make multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \in \mathbb {R}^{1\times {n}}$ to $V_{m_i}^t{^{\prime }} \in \mathbb {R}^{k\times {n}}$ and then combine it with $V_{e_i}^t \in \mathbb {R}^{k\times {n}}$ . Since $V_{m_i}^t$ and $V_{e_i}^t$ mainly represent semantic information, we add the feature vector $V_{feature}^t$ to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context, etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{e^*}^{t-1}$ is also added to $S_t$ . As mentioned in the global encoder module, $V_{e^*}^{t-1}$ is the output of the global LSTM network at time $t-1$ , which encodes the mention context and target entity information from $m_1$ to $m_{t-1}$ . Thus, the state $S_t$ contains the current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate an action.
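A minimal sketch of assembling the state of Eq. 15 for one mention, assuming the per-candidate pieces are already available as NumPy arrays; all dimensions are illustrative.

    import numpy as np

    def build_state(v_mention, v_entities, v_features, v_global_prev):
        """v_mention: (n,) mention vector; v_entities: (k, n) candidate entity vectors;
        v_features: (k, f) lexical/statistical features; v_global_prev: (g,) global LSTM
        output at time t-1."""
        k = v_entities.shape[0]
        v_mention_tiled = np.tile(v_mention, (k, 1))      # extend the mention vector to (k, n)
        v_global_tiled = np.tile(v_global_prev, (k, 1))   # shared across all candidates
        return np.concatenate([v_mention_tiled, v_entities, v_features, v_global_tiled], axis=1)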
According to the status at each time step, we take a corresponding action. Specifically, we define the action at time step $t$ as selecting the target entity $e_t^*$ for $m_t$ . The size of the action space is the number of candidate entities for each mention, where $a_i \in \lbrace 0,1,2...k\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence, we will get a delayed reward.
The agent takes the reward value as the feedback of its action and learns the policy based on it. Since the current selection result has a long-term impact on subsequent decisions, we do not give an immediate reward when taking an action. Instead, a delayed reward is given as follows, which can reflect whether the action improves the overall performance or not.
$$R(a_t) = p(a_t)\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16)
where $p(a_t)\in \lbrace 0,1\rbrace $ indicates whether the current action is correct or not. When the action is correct $p(a_t)=1$ otherwise $p(a_t)=0$ . Hence $\sum _{j=t}^{T}p(a_j)$ and $\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time t to the end of episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.
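The delayed reward of Eq. 16 can be computed directly from the 0/1 correctness indicators of the actions in an episode, as in this small sketch.

    def delayed_rewards(correct):
        """correct[t] = 1 if the action at step t selected the right entity, else 0."""
        T = len(correct) - 1
        rewards = []
        for t, p_t in enumerate(correct):
            future_correct = sum(correct[t:])             # number of correct actions from t to T
            # p_t * future_correct + (1 - p_t) * (future_correct + t - T)
            rewards.append(future_correct if p_t == 1 else future_correct + t - T)
        return rewards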
After defining the state, action, and reward, our main challenge becomes choosing an action from the action space. To solve this problem, we sample each action by a policy network $\pi _{\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and feed them into a multilayer perceptron; for each hidden layer, we generate the output by:
$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17)
Where $W_i$ and $ b_i$ are the parameters of the $i$ th hidden layer, through the $relu$ activation function we get the $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution of actions. The probability distribution is generated as follows:
$$\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18)
Where the $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we will take action to select the target entity from its candidate set. After completing all decisions in the episode, each action will get an expected reward and our goal is to maximize the expected total rewards. Formally, the objective function is defined as:
$$\begin{split} J(\Theta ) &= \mathbb {E}_{(s_t, a_t){\sim }P_\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\ &=\sum _{t}\sum _{a}\pi _{\Theta }(a|s)R(a_t) \end{split}$$ (Eq. 19)
Where $P_\Theta {(s_t, a_t)}$ is the state transfer function, $\pi _{\Theta }(a|s)$ indicates the probability of taking action $a$ under the state $s$ , and $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to the REINFORCE policy gradient algorithm BIBREF13 , we update the policy parameters as follows.
$$\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$$ (Eq. 20)
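To make the REINFORCE update concrete, the sketch below uses a single linear-softmax layer as a stand-in for the multilayer policy network, since its log-probability gradient has a simple closed form; the state dimension, action space size and learning rate are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    state_dim, num_actions, alpha = 64, 5, 1e-3
    W = rng.normal(scale=0.01, size=(num_actions, state_dim))
    b = np.zeros(num_actions)

    def policy(state):
        logits = W @ state + b
        logits -= logits.max()                            # numerical stability
        probs = np.exp(logits)
        return probs / probs.sum()

    def reinforce_update(trajectory, rewards):
        """trajectory: list of (state, action) pairs; rewards: delayed reward R(a_t) per step."""
        global W, b
        for (state, action), r in zip(trajectory, rewards):
            probs = policy(state)
            grad_logits = -probs
            grad_logits[action] += 1.0                    # d log pi(a|s) / d logits = onehot(a) - pi
            W += alpha * r * np.outer(grad_logits, state)
            b += alpha * r * grad_logits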
As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.
Algorithm 1: The Policy Learning for Entity Selector
Input: training data consisting of multiple documents $D = \lbrace D_1, D_2, ..., D_N\rbrace $
Output: the target entities for mentions $\Gamma = \lbrace T_1, T_2, ..., T_N\rbrace $
1. Initialize the policy network parameter $\Theta $ and the global LSTM network parameter $\Phi $ ;
2. for each document $D_k$ in $D$ :
3.     Generate the candidate set for each mention;
4.     Divide the mentions in $D_k$ into multiple sequences $S = \lbrace S_1, S_2, ..., S_N\rbrace $ ;
5.     for each sequence $S_k$ in $S$ :
6.         Rank the mentions $M = \lbrace m_1, m_2, ..., m_n\rbrace $ in $S_k$ based on the local similarity;
7.         for each mention $m_t$ in $S_k$ :
8.             Sample the target entity $e_t^*$ for $m_t$ with $\pi _{\Theta }(a|s)$ ;
9.             Input $V_{m_t}$ and $V_{e_t^*}$ to the global LSTM network;
10.    At the end of sampling, update the parameters:
11.    Compute the delayed reward $R(a_t)$ for each action;
12.    Update the parameter $\Theta $ of the policy network:
           $\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$
13.    Update the parameter $\Phi $ of the global LSTM network.
Experiment
In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 , BIBREF1 . To avoid overfitting to one dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare RLEL with some baseline methods, where our model achieves the state-of-the-art results. We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPUs.
Experiment Setup
We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
OURSELF-WIKI is crawled by ourselves from Wikipedia pages.
During the training of our RLEL model, we select the top K candidate entities for each mention to optimize the memory and run time. In the top K candidate list, we define the recall of the correct target entity as $R_t$ . According to our statistics, when K is set to 1, $R_t$ is 0.853; when K is 5, $R_t$ is 0.977; when K increases to 10, $R_t$ is 0.993. Empirically, we choose the top 5 candidate entities as the input of our RLEL model. For the entity description, there is a lot of redundant information in the Wikipedia page; to reduce the impact of noisy data, we use the TextRank algorithm BIBREF19 to select 15 keywords as the description of the entity. Simultaneously, we choose 15 words around each mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt a mention padding strategy. In short, we copy the last mention in the sequence until the number of mentions reaches the set length.
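The mention padding strategy is straightforward; a small sketch, assuming the sequence is a Python list:

    def pad_mentions(sequence, target_length=4):
        # repeat the last mention until the sequence reaches the set length
        padded = list(sequence)
        while padded and len(padded) < target_length:
            padded.append(padded[-1])
        return padded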
We set the dimensions of word embedding and entity embedding to 300, where the word embedding and entity embedding are released by BIBREF20 and BIBREF0 respectively. For parameters of the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\gamma $ is 0.1. Similarly, in global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the probability of dropout is set to 0.8, and the Adam is utilized as optimizer. In addition, we set the number of MLP layers to 4 and extend the priori feature dimension to 50 in the policy network.
Comparing with Previous Work
We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. There are various methods including the classification model BIBREF17 , rank models BIBREF21 , BIBREF15 and probability graph models BIBREF18 , BIBREF14 , BIBREF22 , BIBREF0 , BIBREF1 . In addition, Cheng et al. BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates entity-relation inference, Globerson et al. BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on limited mentions, and Yamada et al. BIBREF25 propose a word and entity embedding model specifically designed for EL.
We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics:
$$Accuracy = \frac{|M \cap M^*|}{|M \cup M^*|}$$ (Eq. 31)
$$Precision = \frac{|M \cap M^*|}{|M|}$$ (Eq. 32)
where $M^*$ is the gold standard set of linked name mentions, and $M$ is the set of linked name mentions output by an EL method.
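Recall and micro F1 follow the standard definitions:

$$Recall = \frac{|M \cap M^*|}{|M^*|}, \qquad F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$$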
Same as previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. Previous best results on this dataset are generated by BIBREF0 , BIBREF1 which both built CRF models. They calculate the pairwise scores between all candidate entities. Differently, our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates. The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several different datasets to verify the generalization performance of our model.
From Table 3, we can see that RLEL has achieved relatively good performances on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0 , BIBREF1 , BIBREF23 achieve better performances on news datasets such as MSNBC and AQUAINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting to some datasets and improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieves the best overall performance.
For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy on the AIDA-B dataset for situations where gold entities have low popularity. We divide the gold entities according to their pageviews in Wikipedia, and the statistical disambiguation results are shown in Table 4. Since some pageviews cannot be obtained, we only count part of the gold entities. The result indicates that our model is still able to work well for low-frequency entities. But for medium-frequency gold entities, our model does not work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes it difficult for the model to distinguish them.
Discussion on different RLEL variants
To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions has a positive effect on the results. Third, we analyze the results of not adding the global encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.
A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end. To analyze the impact of the number of mentions on joint disambiguation, we experiment with sequences of different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are both very poor. When the sequence length is less than 3, the delayed reward cannot work in reinforcement learning, and when the sequence length reaches 5 or more, noisy data may be added. Finally, we choose 4 adjacent mentions to form a sequence.
In this section, we test whether ranking mentions is helpful for entity selection. At first, we directly input them into the global encoder in the order they appear in the text. We record the disambiguation results and compare them with the method which adopts ranking mentions. As shown in Figure 5a, the model with ranking mentions has achieved better performances on most of the datasets, indicating that it is effective to place the mention with a higher local similarity in front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset; the reason is that most mentions in MSNBC have similar local similarities, so the order of disambiguation has little effect on the final result.
Most of the previous methods mainly use the similarities between entities to correlate each other, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than adding noise, we compare the performance with and without adding the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation; notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method without global encoding.
To illustrate the necessity of adopting reinforcement learning for entity selection, we compare two entity selection strategies as in BIBREF5 . Specifically, we perform entity selection respectively with reinforcement learning and with greedy choice. The greedy choice is to select the entity with the largest local similarity from the candidate set, whereas the reinforcement learning selection is guided by the delayed reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent and just replace the RL selection with a greedy choice. Based on the results in Figure 5c, we can draw the conclusion that our entity selector performs much better than the greedy strategy.
Case Study
Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information.
Related Work
The related work can be roughly divided into two groups: entity linking and reinforcement learning.
Entity Linking
Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently; they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model a mention's local context, ranging from binary classification BIBREF17 to rank models BIBREF26 , BIBREF27 . In these methods, a large number of hand-designed features are applied. For some marginal mentions that are difficult to extract features for, researchers also exploit the data retrieved by search engines BIBREF28 , BIBREF29 or Wikipedia sentences BIBREF30 . However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning is utilized to automatically find semantic features BIBREF31 , BIBREF32 . The entity representations learned by jointly modeling textual contexts and the knowledge base are effective in combining multiple sources of information. To make full use of the information contained in representations, we also utilize the pre-trained entity embeddings in our model.
In recent years, with the assumption that the target entities of all mentions in a document shall be related, many novel global models for joint linking have been proposed. Assuming topical coherence among mentions, the authors in BIBREF33 , BIBREF34 construct factor graph models, which represent the mention and candidate entities as variable nodes, and exploit factor nodes to denote a series of features. Two recent studies BIBREF0 , BIBREF1 use a fully-connected pairwise Conditional Random Field (CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35 , BIBREF18 , BIBREF7 are utilized to select the target entity for each mention. The above probabilistic models usually need to predefine a lot of features, and it is difficult to calculate the max-marginal probability as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 apply a Graph Convolutional Network to flexibly encode entity graphs. However, the graph-based methods are computationally expensive because there are lots of candidate entity nodes in the graph.
To reduce the calculation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focuses on a fixed number of mentions. Unfortunately, choosing the number of attention mentions is not easy in practice. Two recent studies BIBREF8 , BIBREF36 finish linking all mentions by scanning the pairs of mentions at most once; they assume each mention only needs to be consistent with one other mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they do not make full use of the information of previously referred entities for the subsequent entity disambiguation. Nguyen et al. BIBREF2 use a sequence model, but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations. Their model does not consider the long-term impact of current decisions on subsequent choices, nor do they add the selected target entity information to the current state to help disambiguation.
Reinforcement Learning
In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in the game field, such as Go BIBREF37 and Atari games BIBREF38 . Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 , BIBREF39 , BIBREF5 . Feng et al. BIBREF5 used reinforcement learning for the relation classification task by filtering out the noisy data from the sentence bag, and they achieved huge improvements compared with traditional classifiers. Zhang et al. BIBREF40 applied reinforcement learning to sentence representation by automatically discovering task-relevant structures. For automatic taxonomy induction from a set of terms, Han et al. BIBREF41 designed an end-to-end reinforcement learning model to determine which term to select and where to place it on the taxonomy, which effectively reduced the error propagation between the two phases. Inspired by the above works, we also add reinforcement learning to our framework.
Conclusions
In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns the policy on selecting target entities in a sequential manner and makes decisions based on the current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. Each selection result in the current state also has a long-term impact on subsequent decisions, which allows the learned policy to have a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets, and the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and determine which mention should be disambiguated first in the document.
This research is supported by the National Key Research and Development Program of China (No. 2018YFB1004703), the Beijing Municipal Science and Technology Project under grant (No. Z181100002718004), and the National Natural Science Foundation of China grants (No. 61602466).
b0376a7f67f1568a7926eff8ff557a93f434a253 | b0376a7f67f1568a7926eff8ff557a93f434a253_0 | Q: How big is the performance difference between this method and the baseline?
Text: Introduction
Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given Knowledge Base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information extraction (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.
Existing EL methods can be divided into two categories: local models and global models. Local models concern mainly the contextual words surrounding the mentions, where mentions are disambiguated independently. These methods do not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, where mentions are disambiguated jointly. Most previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones is involved, which not only increases the model complexity but also introduces some noise. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in the KB. It is difficult to disambiguate using local models, due to the scarce common information in the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to the basketball team (linked by an orange dashed line) may make the global models mistakenly refer "France" to France national basketball team. So, how can we solve these problems?
We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.
Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong.
In order to achieve these aims, we consider global EL as a sequence decision problem and proposed a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via a LSTM network, which maintains a long-term memory on features of entities which has been selected in previous states. Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the pairs of current mention-entity representations, but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder. In this way, Entity Selector is able to take actions based on the current state and previous ones. When eliminating the ambiguity of all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision.
Deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models which learn with loss functions that just evaluate a particular single decision. By this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue system BIBREF4 and relation classification BIBREF5 , etc. To the best of our knowledge, we are the first to design a RL model for global entity linking. And in this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.
In summary, the main contributions of our paper mainly include following aspects:
Methodology
The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: Local Encoder which encodes local features of mentions and their candidate entities, Global Encoder which encodes the global coherence of mentions in a sequence manner and Entity Selector which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are correlated mutually, we train them jointly. Moreover, the Local Encoder as the basis of the entire framework will be independently trained before the joint training process starts. In the following, we will introduce the technical details of these modules.
Preliminaries
Before introducing our model, we firstly define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace $ , each mention $ m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ or return "NIL" if there is not correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.
Inspired by the previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use its n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory and avoid unnecessary calculations, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of candidate set. The features used in XGBoost can be divided into two aspects, the one is string similarity like the Jaro-Winkler distance between the entity title and the mention, the other is semantic similarity like the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use the statistical features based on the pageview and hyperlinks in Wikipedia. Empirically, we get the pageview of the entity from the Wikipedia Tool Labs which counts the number of visits on each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as final candidate set for each mention.
Local Encoder
Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., \\ e_t^k\rbrace $ , we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we firstly select its $n$ surrounding words, and represent them as word embedding using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace $ as a fixed-size vector $V_{m_t}$ . The description of entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of entity, there are many other valuable information in the knowledge base. To make full use of these information, many researchers trained entity embeddings by combining the description, category, and relationship of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .
After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equations:
$$\Psi (m_t, e_t^i) = MLP(V_{m_t}\oplus {V_{e_t^i}})$$ (Eq. 9)
Where $\oplus $ indicates vector concatenation. With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks ground truth higher than others. The rank loss function is defined as follows:
$$L_{local} = max(0, \gamma -\Psi (m_t, e_t^+)+\Psi (m_t, e_t^-))$$ (Eq. 10)
When optimizing the objective function, we minimize the rank loss similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ . Where $\gamma > 0$ is a margin parameter and our purpose is to make the score of the positive target entity $e_t^+$ is at least a margin $\gamma $ higher than that of negative candidate entity $e_t^-$ .
With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder.
Global Encoder
In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. So, we use an LSTM network which is capable of maintaining the long-term memory to encode the ranked mention sequence. What we need to emphasize is that our global encoder just encode the mentions that have been disambiguated by the entity selector which is denoted as $V_{a_t}$ .
As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we firstly divide the adjacent mentions into a segment by the order they appear in the document based on the observation that the topical consistency attenuates along with the distance between the mentions. Then, we sort mentions in a segment based on the local similarity and place the mention that has a higher similarity value in the front of the sequence. In Equation 1, we define the local similarity of $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\Psi _{max}(m_i, e_i^a)$ as the the maximum local similarity between the $m_i$ and its candidate set $C_{m_i} = \lbrace e_i^1, e_i^2,..., e_i^n\rbrace $ . We use $\Psi _{max}(m_i, e_i^a)$ as criterion when sorting mentions. For instance, if $\Psi _{max}(m_i, e_i^a) > \Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under this circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, previous selected target entity information is encoded by global encoder and the encoding result will be served as input to the entity selector.
Before using entity selector to choose target entities, we pre-trained the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model.
$$L_{global} = -\frac{1}{n}\sum _x{\left[y\ln {y^{^{\prime }}} + (1-y)\ln (1-y^{^{\prime }})\right]}$$ (Eq. 12)
Where $y\in \lbrace 0,1\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\prime }}\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections.
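A minimal sketch of the cross-entropy objective of Eq. 12, assuming the labels and model outputs are given as NumPy arrays:

    import numpy as np

    def cross_entropy_loss(y_true, y_pred, eps=1e-7):
        """y_true in {0, 1}; y_pred in (0, 1) are the model outputs."""
        y_pred = np.clip(y_pred, eps, 1.0 - eps)          # avoid log(0)
        return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))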
Entity Selector
In the entity selector module, we choose the target entity from the candidate set based on the results of the local and global encoders. In the process of sequence disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 . Under the guidance of the policy, the agent can decide which action (choosing the target entity from the candidate set) should be taken at each state, and receives a delayed reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select the target entity via a policy network.
The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows:
$$S_t = V_{m_i}^t\oplus {V_{e_i}^t}\oplus {V_{feature}^t}\oplus {V_{e^*}^{t-1}}$$ (Eq. 15)
Where $\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities corresponding to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we make multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \in \mathbb {R}^{1\times {n}}$ to $V_{m_i}^t{^{\prime }} \in \mathbb {R}^{k\times {n}}$ and then combine it with $V_{e_i}^t \in \mathbb {R}^{k\times {n}}$ . Since $V_{m_i}^t$ and $V_{e_i}^t$ mainly represent semantic information, we add the feature vector $V_{feature}^t$ to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context, etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{e^*}^{t-1}$ is also added to $S_t$ . As mentioned in the global encoder module, $V_{e^*}^{t-1}$ is the output of the global LSTM network at time $t-1$ , which encodes the mention context and target entity information from $m_1$ to $m_{t-1}$ . Thus, the state $S_t$ contains the current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate an action.
According to the status at each time step, we take corresponding action. Specifically, we define the action at time step $t$ is to select the target entity $e_t^*$ for $m_t$ . The size of action space is the number of candidate entities for each mention, where $a_i \in \lbrace 0,1,2...k\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence we will get a delayed reward.
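In practice, taking an action amounts to sampling an index into the candidate list from the policy's output distribution; a small sketch, assuming the probabilities have already been computed (the greedy branch is only an illustrative alternative for deterministic evaluation):

    import numpy as np

    def sample_action(action_probs, greedy=False):
        """action_probs: probability distribution over the k candidate entities."""
        if greedy:
            return int(np.argmax(action_probs))
        return int(np.random.choice(len(action_probs), p=action_probs))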
The agent takes the reward value as the feedback of its action and learns the policy based on it. Since the current selection result has a long-term impact on subsequent decisions, we do not give an immediate reward when taking an action. Instead, a delayed reward is given as follows, which can reflect whether the action improves the overall performance or not.
$$R(a_t) = p(a_t)\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16)
where $p(a_t)\in \lbrace 0,1\rbrace $ indicates whether the current action is correct or not. When the action is correct $p(a_t)=1$ otherwise $p(a_t)=0$ . Hence $\sum _{j=t}^{T}p(a_j)$ and $\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time t to the end of episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.
After defining the state, action, and reward, our main challenge becomes to choose an action from the action space. To solve this problem, we sample the value of each action by a policy network $\pi _{\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and fed them into a multilayer perceptron, for each hidden layer, we generate the output by:
$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17)
Where $W_i$ and $ b_i$ are the parameters of the $i$ th hidden layer, through the $relu$ activation function we get the $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution of actions. The probability distribution is generated as follows:
$$\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18)
Where the $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we will take action to select the target entity from its candidate set. After completing all decisions in the episode, each action will get an expected reward and our goal is to maximize the expected total rewards. Formally, the objective function is defined as:
$$\begin{split} J(\Theta ) &= \mathbb {E}_{(s_t, a_t){\sim }P_\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\ &=\sum _{t}\sum _{a}\pi _{\Theta }(a|s)R(a_t) \end{split}$$ (Eq. 19)
Where $P_\Theta {(s_t, a_t)}$ is the state transfer function, $\pi _{\Theta }(a|s)$ indicates the probability of taking action $a$ under the state $s$ , $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to REINFORCE policy gradient algorithm BIBREF13 , we update the policy gradient by the way of equation 9.
$$\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$$ (Eq. 20)
As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.
Algorithm 1: The Policy Learning for Entity Selector
Input: training data consisting of multiple documents $D = \lbrace D_1, D_2, ..., D_N\rbrace $
Output: the target entities for mentions $\Gamma = \lbrace T_1, T_2, ..., T_N\rbrace $
1. Initialize the policy network parameter $\Theta $ and the global LSTM network parameter $\Phi $ ;
2. for each document $D_k$ in $D$ :
3.     Generate the candidate set for each mention;
4.     Divide the mentions in $D_k$ into multiple sequences $S = \lbrace S_1, S_2, ..., S_N\rbrace $ ;
5.     for each sequence $S_k$ in $S$ :
6.         Rank the mentions $M = \lbrace m_1, m_2, ..., m_n\rbrace $ in $S_k$ based on the local similarity;
7.         for each mention $m_t$ in $S_k$ :
8.             Sample the target entity $e_t^*$ for $m_t$ with $\pi _{\Theta }(a|s)$ ;
9.             Input $V_{m_t}$ and $V_{e_t^*}$ to the global LSTM network;
10.    At the end of sampling, update the parameters:
11.    Compute the delayed reward $R(a_t)$ for each action;
12.    Update the parameter $\Theta $ of the policy network:
           $\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$
13.    Update the parameter $\Phi $ of the global LSTM network.
Experiment
In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 , BIBREF1 . To avoid overfitting with one dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare the RLEL with some baseline methods, where our model achieves the state-of-the-art results. We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPU.
Experiment Setup
We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
OURSELF-WIKI is crawled by ourselves from Wikipedia pages.
During the training of our RLEL model, we select top K candidate entities for each mention to optimize the memory and run time. In the top K candidate list, we define the recall of correct target entity is $R_t$ . According to our statistics, when K is set to 1, $R_t$ is 0.853, when K is 5, $R_t$ is 0.977, when K increases to 10, $R_t$ is 0.993. Empirically, we choose top 5 candidate entities as the input of our RLEL model. For the entity description, there are lots of redundant information in the wikipedia page, to reduce the impact of noise data, we use TextRank algorithm BIBREF19 to select 15 keywords as description of the entity. Simultaneously, we choose 15 words around mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt the mention padding strategy. In short, we copy the last mention in the sequence until the number of mentions reaches the set length.
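The recall statistic $R_t$ can be computed as in the following sketch; the data layout is an assumption for illustration.

    def recall_at_k(candidate_lists, gold_entities, k):
        """candidate_lists[i]: ranked candidate entities for mention i;
        gold_entities[i]: the correct target entity for mention i."""
        hits = sum(1 for cands, gold in zip(candidate_lists, gold_entities) if gold in cands[:k])
        return hits / max(len(gold_entities), 1)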
We set the dimensions of word embedding and entity embedding to 300, where the word embedding and entity embedding are released by BIBREF20 and BIBREF0 respectively. For parameters of the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\gamma $ is 0.1. Similarly, in global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the probability of dropout is set to 0.8, and the Adam is utilized as optimizer. In addition, we set the number of MLP layers to 4 and extend the priori feature dimension to 50 in the policy network.
Comparing with Previous Work
We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. There are various methods including classification model BIBREF17 , rank model BIBREF21 , BIBREF15 and probability graph model BIBREF18 , BIBREF14 , BIBREF22 , BIBREF0 , BIBREF1 . Except that, Cheng $et$ $al.$ BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates the entity-relation inference. Globerson $et$ $al.$ BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on limited mentions, Yamada $et$ $al.$ BIBREF25 propose a word and entity embedding model specifically designed for EL.
We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics:
$$Accuracy = \frac{|M \cap M^*|}{|M \cup M^*|}$$ (Eq. 31)
$$Precision = \frac{|M \cap M^*|}{|M|}$$ (Eq. 32)
where $M^*$ is the golden standard set of the linked name mentions, $M$ is the set of linked name mentions outputted by an EL method.
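Treating $M$ and $M^*$ as sets of (mention, entity) pairs, the metrics can be computed as in this sketch; the F1 is the usual harmonic mean of precision and recall.

    def micro_metrics(predicted, gold):
        """predicted, gold: sets of (mention, entity) pairs."""
        inter = len(predicted & gold)
        accuracy = inter / max(len(predicted | gold), 1)
        precision = inter / max(len(predicted), 1)
        recall = inter / max(len(gold), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-8)
        return accuracy, precision, recall, f1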
Same as previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. Previous best results on this dataset are generated by BIBREF0 , BIBREF1 which both built CRF models. They calculate the pairwise scores between all candidate entities. Differently, our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates. The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several different datasets to verify the generalization performance of our model.
From Table 3, we can see that RLEL has achieved relatively good performances on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0 , BIBREF1 , BIBREF23 achieve better performances on news datasets such as MSNBC and AQUAINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting to some datasets and improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieves the best overall performance.
For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy of the AIDA-B dataset for situations where gold entities have low popularity. We divide the gold entities according to their pageviews in wikipedia, the statistical disambiguation results are shown in Table 4. Since some pageviews can not be obtained, we only count part of gold entities. The result indicates that our model is still able to work well for low-frequency entities. But for medium-frequency gold entities, our model doesn't work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes the model difficult to distinguish.
Discussion on different RLEL variants
To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions has a positive effect on the results. Third, we analyze the results of omitting the global encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.
A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end. To analyze the impact of the number of mentions on joint disambiguation, we experiment with sequences of different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are poor in both cases. When the sequence length is less than 3, the delayed reward cannot take effect in reinforcement learning, and when the sequence length reaches 5 or more, noisy data may be added. Finally, we choose 4 adjacent mentions to form a sequence.
In this section, we test whether ranking mentions is helpful for entity selection. First, we directly input the mentions into the global encoder in the order they appear in the text. We record the disambiguation results and compare them with the method which ranks the mentions. As shown in Figure 5a, the model with ranked mentions achieves better performance on most datasets, indicating that it is effective to place mentions with higher local similarity at the front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset; the reason is that most mentions in MSNBC have similar local similarities, so the order of disambiguation has little effect on the final result.
Most previous methods mainly use the similarities between entities to correlate them with each other, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than adding noise, we compare the performance with and without the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation; notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method without global encoding.
To illustrate the necessity of adopting reinforcement learning for entity selection, we compare two entity selection strategies, following BIBREF5 . Specifically, we perform entity selection with reinforcement learning and with a greedy choice, respectively. The greedy choice selects the entity with the largest local similarity from the candidate set, whereas the reinforcement learning selection is guided by the delayed reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent and only replace the RL selection with the greedy choice. Based on the results in Figure 5c, we can conclude that our entity selector performs much better than the greedy strategy.
Case Study
Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information.
Related Work
The related work can be roughly divided into two groups: entity linking and reinforcement learning.
Entity Linking
Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently; they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model a mention's local context, ranging from binary classification BIBREF17 to ranking models BIBREF26 , BIBREF27 . In these methods, a large number of hand-designed features are applied. For marginal mentions whose features are difficult to extract, researchers also exploit data retrieved by search engines BIBREF28 , BIBREF29 or Wikipedia sentences BIBREF30 . However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning has been utilized to automatically find semantic features BIBREF31 , BIBREF32 . Entity representations learned by jointly modeling textual contexts and the knowledge base are effective at combining multiple sources of information. To make full use of the information contained in such representations, we also utilize pre-trained entity embeddings in our model.
In recent years, under the assumption that the target entities of all mentions in a document should be related, many novel global models for joint linking have been proposed. Assuming topical coherence among mentions, the authors of BIBREF33 , BIBREF34 construct factor graph models, which represent the mention and candidate entities as variable nodes and exploit factor nodes to denote a series of features. Two recent studies BIBREF0 , BIBREF1 use a fully-connected pairwise Conditional Random Field (CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35 , BIBREF18 , BIBREF7 is utilized to select the target entity for each mention. The above probabilistic models usually need to predefine many features, and calculating the max-marginal probability becomes difficult as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 apply a Graph Convolutional Network to flexibly encode entity graphs. However, graph-based methods are computationally expensive because there are many candidate entity nodes in the graph.
To reduce the computation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focuses on a fixed number of mentions. Unfortunately, choosing the number of attention mentions is not easy in practice. Two recent studies BIBREF8 , BIBREF36 link all mentions by scanning the pairs of mentions at most once; they assume each mention only needs to be consistent with one other mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they do not make full use of the information of previously referred entities for subsequent entity disambiguation. Nguyen et al. BIBREF2 use a sequence model, but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations. Their model does not consider the long-term impact of current decisions on subsequent choices, nor do they add the selected target entity information to the current state to help disambiguation.
Reinforcement Learning
In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in games, such as Go BIBREF37 and Atari games BIBREF38 . Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 , BIBREF39 , BIBREF5 . Feng et al. BIBREF5 used reinforcement learning for the relation classification task by filtering out the noisy data from the sentence bag, and they achieved large improvements compared with traditional classifiers. Zhang et al. BIBREF40 applied reinforcement learning to sentence representation by automatically discovering task-relevant structures. To automate taxonomy induction from a set of terms, Han et al. BIBREF41 designed an end-to-end reinforcement learning model to determine which term to select and where to place it in the taxonomy, which effectively reduced the error propagation between the two phases. Inspired by the above works, we also add reinforcement learning to our framework.
Conclusions
In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns a policy for selecting target entities in a sequential manner and makes decisions based on the current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. Each selection result in the current state also has a long-term impact on subsequent decisions, which gives the learned policy a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets; the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and to determine which mention should be disambiguated first in the document.
This research is supported by the National Key Research and Development Program of China (No. 2018YFB1004703), the Beijing Municipal Science and Technology Project under grant (No. Z181100002718004), and the National Natural Science Foundation of China grants (No. 61602466). | Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores. |
dad8cc543a87534751f9f9e308787e1af06f0627 | dad8cc543a87534751f9f9e308787e1af06f0627_0 | Q: What datasets used for evaluation?
Text: Introduction
Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information extraction (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.
Existing EL methods can be divided into two categories: local models and global models. Local models focus mainly on the contextual words surrounding the mentions, and mentions are disambiguated independently. These methods do not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, and mentions are disambiguated jointly. Most previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones is involved, which not only increases the model complexity but also introduces noise. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in the KB. It is difficult to disambiguate using local models, due to the scarce common information between the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to the basketball team (linked by an orange dashed line) may make global models mistakenly link "France" to France national basketball team. So, how can these problems be solved?
We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.
Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong.
In order to achieve these aims, we consider global EL as a sequence decision problem and propose a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, the Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured using the learned vector representations. In order to enforce global coherence between mentions, the Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via an LSTM network, which maintains a long-term memory of the features of entities that have been selected in previous states. The Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network considers not only the current mention-entity representations, but also the features of entities referred to in previous states, which are provided by the Global Encoder. In this way, the Entity Selector is able to take actions based on the current state and previous ones. After disambiguating all mentions in the sequence, delayed rewards are used to adjust its policy in order to obtain an optimized global decision.
A deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models that learn with loss functions evaluating only a single decision. Owing to this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue systems BIBREF4 and relation classification BIBREF5 . To the best of our knowledge, we are the first to design an RL model for global entity linking. In this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.
In summary, the main contributions of our paper include the following aspects:
Methodology
The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: the Local Encoder, which encodes local features of mentions and their candidate entities; the Global Encoder, which encodes the global coherence of mentions in a sequential manner; and the Entity Selector, which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are mutually correlated, we train them jointly. Moreover, the Local Encoder, as the basis of the entire framework, is trained independently before the joint training starts. In the following, we introduce the technical details of these modules.
Preliminaries
Before introducing our model, we first define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace $ , each mention $ m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ , or to return "NIL" if there is no correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for the model to select from.
Inspired by previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For mentions without corresponding disambiguation pages, we use their n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory usage and avoid unnecessary computation, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of the candidate set. The features used in XGBoost can be divided into two aspects: string similarity, such as the Jaro-Winkler distance between the entity title and the mention, and semantic similarity, such as the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use statistical features based on pageviews and hyperlinks in Wikipedia. Empirically, we obtain the pageviews of each entity from the Wikipedia Tool Labs, which counts the number of visits to each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as the final candidate set for each mention.
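To make the pruning step concrete, the sketch below shows how candidates might be scored with a trained XGBoost classifier and truncated to the top k. It is illustrative only: the candidate dictionary keys, the crude character-overlap stand-in for a string-similarity measure such as Jaro-Winkler, and the exact feature set are our assumptions, not the paper's specification.

```python
import numpy as np
import xgboost as xgb

def candidate_features(mention_surface, context_vec, cand):
    """Illustrative features for one (mention, candidate) pair.

    `cand` is assumed to be a dict with keys 'title', 'embedding', 'pageviews'.
    """
    # Crude placeholder for a string-similarity feature (e.g. Jaro-Winkler).
    string_sim = len(set(mention_surface.lower()) & set(cand["title"].lower())) / \
                 max(len(set(cand["title"].lower())), 1)
    # Cosine similarity between mention context representation and entity embedding.
    cos = float(context_vec @ cand["embedding"] /
                (np.linalg.norm(context_vec) * np.linalg.norm(cand["embedding"]) + 1e-8))
    return [string_sim, cos, np.log1p(cand["pageviews"])]

def prune_candidates(mention_surface, context_vec, candidates, ranker: xgb.XGBClassifier, k=5):
    """Score candidates with a trained XGBoost model and keep the top k."""
    X = np.array([candidate_features(mention_surface, context_vec, c) for c in candidates])
    scores = ranker.predict_proba(X)[:, 1]      # probability of being the target entity
    top = np.argsort(-scores)[:k]
    return [candidates[i] for i in top]
```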
Local Encoder
Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., \\ e_t^k\rbrace $ , we aim to obtain their local representations based on the mention context and the candidate entity descriptions. For each mention, we first select its $n$ surrounding words and represent them as word embeddings using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace $ as a fixed-size vector $V_{m_t}$ . The entity description is encoded as $D_{e_t^i}$ in the same way. Apart from the entity description, there is much other valuable information in the knowledge base. To make full use of this information, many researchers have trained entity embeddings by combining the description, category, and relationships of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .
After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equation:
$$\Psi (m_t, e_t^i) = MLP(V_{m_t}\oplus {V_{e_t^i}})$$ (Eq. 9)
where $\oplus $ indicates vector concatenation. To distinguish the correct target entity from wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks the ground truth higher than the others. The ranking loss function is defined as follows:
$$L_{local} = max(0, \gamma -\Psi (m_t, e_t^+)+\Psi (m_t, e_t^-))$$ (Eq. 10)
When optimizing the objective function, we minimize the ranking loss, similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ , where $\gamma > 0$ is a margin parameter; our goal is to make the score of the positive target entity $e_t^+$ at least a margin $\gamma $ higher than that of the negative candidate entity $e_t^-$ .
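A minimal PyTorch sketch of the local scoring function in Eq. (9) and the pairwise hinge loss in Eq. (10) is shown below; the paper's implementation uses TensorFlow and its exact layer sizes are not given here, so the architecture is illustrative (the margin default of 0.1 matches the reported hyperparameter).

```python
import torch
import torch.nn as nn

class LocalScorer(nn.Module):
    """MLP that maps a concatenated (mention, entity) representation to a scalar score."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, v_mention, v_entity):
        # Psi(m_t, e_t^i) = MLP(V_m ⊕ V_e)
        return self.mlp(torch.cat([v_mention, v_entity], dim=-1)).squeeze(-1)

def hinge_rank_loss(score_pos, score_neg, gamma=0.1):
    # L_local = max(0, gamma - Psi(m, e+) + Psi(m, e-)), averaged over a batch of pairs
    return torch.clamp(gamma - score_pos + score_neg, min=0).mean()
```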
With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder.
Global Encoder
In the global encoder module, we aim to enforce topical coherence among the mentions and their target entities. Therefore, we use an LSTM network, which is capable of maintaining long-term memory, to encode the ranked mention sequence. We emphasize that our global encoder only encodes the mentions that have already been disambiguated by the entity selector, whose output is denoted as $V_{a_t}$ .
As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. Therefore, we first group adjacent mentions into segments in the order they appear in the document, based on the observation that topical consistency attenuates with the distance between mentions. Then, we sort the mentions in each segment based on the local similarity and place mentions with higher similarity values at the front of the sequence. In Equation 1, we define the local similarity between $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\Psi _{max}(m_i, e_i^a)$ as the maximum local similarity between $m_i$ and its candidate set $C_{m_i} = \lbrace e_i^1, e_i^2,..., e_i^n\rbrace $ . We use $\Psi _{max}(m_i, e_i^a)$ as the criterion when sorting mentions. For instance, if $\Psi _{max}(m_i, e_i^a) > \Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under these circumstances, the mentions in the front positions may not be able to make good use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, previously selected target entity information is encoded by the global encoder, and the encoding result serves as input to the entity selector.
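The segmenting-and-sorting step can be summarized by the short sketch below; it assumes each mention carries a precomputed maximum local similarity from the local encoder, and the default segment length of 4 matches the value chosen later in the experiments.

```python
def order_mentions(mentions, seg_len=4):
    """Group adjacent mentions into segments of length `seg_len` and sort each
    segment by the maximum local similarity of its candidates (descending).

    Each mention is assumed to carry a precomputed `max_local_sim` value, i.e.
    max_a Psi(m_i, e_i^a) from the local encoder.
    """
    segments = [mentions[i:i + seg_len] for i in range(0, len(mentions), seg_len)]
    return [sorted(seg, key=lambda m: m.max_local_sim, reverse=True) for seg in segments]
```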
Before using the entity selector to choose target entities, we pre-train the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross-entropy loss function to train the model.
$$L_{global} = -\frac{1}{n}\sum _x{\left[y\ln {y^{^{\prime }}} + (1-y)\ln (1-y^{^{\prime }})\right]}$$ (Eq. 12)
Where $y\in \lbrace 0,1\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\prime }}\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections.
Entity Selector
In the entity selector module, we choose the target entity from the candidate set based on the results of the local and global encoders. In the process of sequence disambiguation, each selection result has an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network, which can learn a stochastic policy and prevents the agent from getting stuck in an intermediate state BIBREF12 . Under the guidance of the policy, the agent decides which action (choosing the target entity from the candidate set) should be taken at each state, and receives a delayed reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select the target entity via a policy network.
The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows:
$$S_t = V_{m_i}^t\oplus {V_{e_i}^t}\oplus {V_{feature}^t}\oplus {V_{e^*}^{t-1}}$$ (Eq. 15)
where $\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities corresponding to it. In order to compare the semantic relevance between the mention and each candidate entity at the same time, we make multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \in \mathbb {R}^{1\times {n}}$ to $V_{m_i}^t{^{\prime }} \in \mathbb {R}^{k\times {n}}$ and then combine it with $V_{e_i}^t \in \mathbb {R}^{k\times {n}}$ . Since $V_{m_i}^t$ and $V_{e_i}^t$ mainly represent semantic information, we add the feature vector $V_{feature}^t$ to enrich the state with lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context, etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{e^*}^{t-1}$ is also added to $S_t$ . As mentioned in the global encoder module, $V_{e^*}^{t-1}$ is the output of the global LSTM network at time $t-1$ , which encodes the mention context and target entity information from $m_1$ to $m_{t-1}$ . Thus, the state $S_t$ contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate an action.
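A small sketch of how the per-candidate state in Eq. (15) can be assembled is given below; the tensor shapes and function name are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def build_state(v_mention, v_candidates, v_features, v_prev_global):
    """Assemble S_t = V_m ⊕ V_e ⊕ V_feature ⊕ V_{e*}^{t-1}, one row per candidate.

    v_mention:     (n,)   local representation of the current mention
    v_candidates:  (k, n) representations of the k candidate entities
    v_features:    (k, f) hand-crafted features per candidate
    v_prev_global: (g,)   global LSTM output encoding previously selected entities
    """
    k = v_candidates.shape[0]
    tiled_mention = np.tile(v_mention, (k, 1))      # copy the mention vector k times
    tiled_global = np.tile(v_prev_global, (k, 1))
    return np.concatenate([tiled_mention, v_candidates, v_features, tiled_global], axis=1)
```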
According to the state at each time step, we take a corresponding action. Specifically, we define the action at time step $t$ as selecting the target entity $e_t^*$ for $m_t$ . The size of the action space is the number of candidate entities for each mention, where $a_i \in \lbrace 0,1,2...k\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence, we receive a delayed reward.
The agent takes the reward value as feedback for its actions and learns the policy based on it. Since the current selection result has a long-term impact on subsequent decisions, we do not give an immediate reward when taking an action. Instead, a delayed reward is given as follows, which reflects whether the action improves the overall performance or not.
$$R(a_t) = p(a_t)\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16)
where $p(a_t)\in \lbrace 0,1\rbrace $ indicates whether the current action is correct or not: when the action is correct, $p(a_t)=1$ ; otherwise, $p(a_t)=0$ . Hence $\sum _{j=t}^{T}p(a_j)$ and $\sum _{j=t}^{T}p(a_j) + t - T$ respectively reflect the number of correct and wrong actions from time $t$ to the end of the episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.
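The delayed reward in Eq. (16) can be implemented directly; the sketch below is a literal transcription of the formula, with the only assumption being that $t$ is treated as 1-based and $T$ as the episode length.

```python
def delayed_rewards(correct):
    """Compute R(a_t) for each action in an episode, following Eq. (16).

    `correct` is the list of flags p(a_t) in {0, 1} for the whole episode.
    """
    T = len(correct)
    rewards = []
    for t in range(1, T + 1):
        p_t = correct[t - 1]
        future_correct = sum(correct[t - 1:])               # sum_{j=t}^{T} p(a_j)
        rewards.append(p_t * future_correct + (1 - p_t) * (future_correct + t - T))
    return rewards
```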
After defining the state, action, and reward, our main challenge becomes choosing an action from the action space. To solve this problem, we sample actions according to a policy network $\pi _{\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and feed them into a multilayer perceptron. For each hidden layer, we generate the output by:
$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17)
where $W_i$ and $ b_i$ are the parameters of the $i$ -th hidden layer; through the ReLU activation function we obtain $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution over actions. The probability distribution is generated as follows:
$$\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18)
where $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we take an action to select the target entity from its candidate set. After completing all decisions in the episode, each action receives an expected reward, and our goal is to maximize the expected total reward. Formally, the objective function is defined as:
$$\begin{split} J(\Theta ) &= \mathbb {E}_{(s_t, a_t){\sim }P_\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\ &=\sum _{t}\sum _{a}\pi _{\Theta }(a|s)R(a_t) \end{split}$$ (Eq. 19)
where $P_\Theta {(s_t, a_t)}$ is the state transition function, $\pi _{\Theta }(a|s)$ indicates the probability of taking action $a$ in state $s$ , and $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to the REINFORCE policy gradient algorithm BIBREF13 , we update the policy parameters as in Equation 9.
$$\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$$ (Eq. 20)
As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.
Algorithm 1: The Policy Learning for Entity Selector
Input: training data consisting of multiple documents $D = \lbrace D_1, D_2, ..., D_N\rbrace $ ; the target entities for mentions $\Gamma = \lbrace T_1, T_2, ..., T_N\rbrace $
1. Initialize the policy network parameter $\Theta $ and the global LSTM network parameter $\Phi $ ;
2. for each document $D_k$ in $D$ :
3.   Generate the candidate set for each mention;
4.   Divide the mentions in $D_k$ into multiple sequences $S = \lbrace S_1, S_2, ..., S_N\rbrace $ ;
5.   for each sequence $S_k$ in $S$ :
6.     Rank the mentions $M = \lbrace m_1, m_2, ..., m_n\rbrace $ in $S_k$ based on the local similarity;
7.     for each mention $m_t$ in $M$ :
8.       Sample the target entity $e_t^*$ for $m_t$ with $\pi _{\Theta }(a|s)$ ;
9.       Input $V_{m_t}$ and $V_{e_t^*}$ to the global LSTM network;
10.    At the end of sampling, update the parameters:
11.      Compute the delayed reward $R(a_t)$ for each action;
12.      Update the parameter $\Theta $ of the policy network:
         $\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$
13.      Update the parameter $\Phi $ of the global LSTM network.
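The policy-gradient step in line 12 of the algorithm can be condensed into a few lines of PyTorch; the sketch below is illustrative only (the paper's implementation is in TensorFlow), and it assumes the policy network, state construction and delayed rewards are available as sketched earlier.

```python
import torch

def reinforce_update(policy_net, optimizer, states, actions, rewards):
    """One REINFORCE step: ascend sum_t R(a_t) * log pi_Theta(a_t | s_t).

    states:  (T, k, state_dim) tensor, one row of k candidate states per decision
    actions: (T,) long tensor with the index of the chosen candidate
    rewards: (T,) float tensor with the delayed rewards R(a_t)
    """
    logits = policy_net(states).squeeze(-1)          # (T, k) unnormalized action scores
    log_probs = torch.log_softmax(logits, dim=-1)    # log pi(a | s)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(rewards * chosen).sum()                 # negate the objective for gradient ascent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```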
Experiment
In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 , BIBREF1 . To avoid overfitting to a single dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare RLEL with several baseline methods, where our model achieves state-of-the-art results. We implement our models in TensorFlow and run experiments on 4 Tesla V100 GPUs.
Experiment Setup
We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains the top two stories in each of ten news categories (Politics, Business, Sports, etc.)
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
OURSELF-WIKI is crawled by ourselves from Wikipedia pages.
During the training of our RLEL model, we select the top K candidate entities for each mention to optimize memory and run time. For the top K candidate list, we define the recall of the correct target entity as $R_t$ . According to our statistics, when K is set to 1, $R_t$ is 0.853; when K is 5, $R_t$ is 0.977; and when K increases to 10, $R_t$ is 0.993. Empirically, we choose the top 5 candidate entities as the input of our RLEL model. For the entity description, there is a lot of redundant information in the Wikipedia page; to reduce the impact of noisy data, we use the TextRank algorithm BIBREF19 to select 15 keywords as the description of the entity. Meanwhile, we choose the 15 words around a mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt a mention padding strategy. In short, we copy the last mention in the sequence until the number of mentions reaches the set length.
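The padding strategy amounts to repeating the last mention, as in the following one-function sketch (the function name and default length of 4 are illustrative, the latter matching the chosen sequence length):

```python
def pad_mentions(sequence, target_len=4):
    """Pad a mention sequence by repeating its last element until it reaches target_len."""
    padded = list(sequence)
    while padded and len(padded) < target_len:
        padded.append(padded[-1])
    return padded
```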
We set the dimensions of the word embedding and entity embedding to 300, where the word embeddings and entity embeddings are those released by BIBREF20 and BIBREF0 respectively. For the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\gamma $ is 0.1. Similarly, in the global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In both LSTM networks, the learning rate is set to 1e-3, the dropout probability is set to 0.8, and Adam is used as the optimizer. In addition, we set the number of MLP layers to 4 and extend the prior feature dimension to 50 in the policy network.
Comparing with Previous Work
We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. These include classification models BIBREF17 , ranking models BIBREF21 , BIBREF15 and probabilistic graphical models BIBREF18 , BIBREF14 , BIBREF22 , BIBREF0 , BIBREF1 . In addition, Cheng $et$ $al.$ BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates entity-relation inference, Globerson $et$ $al.$ BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on a limited number of mentions, and Yamada $et$ $al.$ BIBREF25 propose a word and entity embedding model specifically designed for EL.
We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics:
$$Accuracy = \frac{|M \cap M^*|}{|M \cup M^*|}$$ (Eq. 31)
$$Precision = \frac{|M \cap M^*|}{|M|}$$ (Eq. 32)
where $M^*$ is the gold-standard set of linked name mentions and $M$ is the set of linked name mentions output by an EL method.
Same as previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. Previous best results on this dataset are generated by BIBREF0 , BIBREF1 which both built CRF models. They calculate the pairwise scores between all candidate entities. Differently, our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates. The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several different datasets to verify the generalization performance of our model.
From Table 3, we can see that RLEL achieves relatively good performance on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0 , BIBREF1 , BIBREF23 achieve better performance on news datasets such as MSNBC and AQUAINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting to particular datasets and to improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieves the best overall performance.
For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy on the AIDA-B dataset for cases where the gold entities have low popularity. We divide the gold entities according to their pageviews in Wikipedia; the disambiguation results per group are shown in Table 4. Since some pageviews cannot be obtained, we only count a subset of the gold entities. The result indicates that our model is still able to work well for low-frequency entities. However, for medium-frequency gold entities, our model does not work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes them difficult for the model to distinguish.
Discussion on different RLEL variants
To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions has a positive effect on the results. Third, we analyze the results of omitting the global encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.
A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end. To analyze the impact of the number of mentions on joint disambiguation, we experiment with sequences of different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are poor in both cases. When the sequence length is less than 3, the delayed reward cannot take effect in reinforcement learning, and when the sequence length reaches 5 or more, noisy data may be added. Finally, we choose 4 adjacent mentions to form a sequence.
In this section, we test whether ranking mentions is helpful for entity selection. First, we directly input the mentions into the global encoder in the order they appear in the text. We record the disambiguation results and compare them with the method which ranks the mentions. As shown in Figure 5a, the model with ranked mentions achieves better performance on most datasets, indicating that it is effective to place mentions with higher local similarity at the front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset; the reason is that most mentions in MSNBC have similar local similarities, so the order of disambiguation has little effect on the final result.
Most previous methods mainly use the similarities between entities to correlate them with each other, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than adding noise, we compare the performance with and without the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation; notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method without global encoding.
To illustrate the necessity of adopting reinforcement learning for entity selection, we compare two entity selection strategies, following BIBREF5 . Specifically, we perform entity selection with reinforcement learning and with a greedy choice, respectively. The greedy choice selects the entity with the largest local similarity from the candidate set, whereas the reinforcement learning selection is guided by the delayed reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent and only replace the RL selection with the greedy choice. Based on the results in Figure 5c, we can conclude that our entity selector performs much better than the greedy strategy.
Case Study
Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information.
Related Work
The related work can be roughly divided into two groups: entity linking and reinforcement learning.
Entity Linking
Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently; they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model a mention's local context, ranging from binary classification BIBREF17 to ranking models BIBREF26 , BIBREF27 . In these methods, a large number of hand-designed features are applied. For marginal mentions whose features are difficult to extract, researchers also exploit data retrieved by search engines BIBREF28 , BIBREF29 or Wikipedia sentences BIBREF30 . However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning has been utilized to automatically find semantic features BIBREF31 , BIBREF32 . Entity representations learned by jointly modeling textual contexts and the knowledge base are effective at combining multiple sources of information. To make full use of the information contained in such representations, we also utilize pre-trained entity embeddings in our model.
In recent years, under the assumption that the target entities of all mentions in a document should be related, many novel global models for joint linking have been proposed. Assuming topical coherence among mentions, the authors of BIBREF33 , BIBREF34 construct factor graph models, which represent the mention and candidate entities as variable nodes and exploit factor nodes to denote a series of features. Two recent studies BIBREF0 , BIBREF1 use a fully-connected pairwise Conditional Random Field (CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35 , BIBREF18 , BIBREF7 is utilized to select the target entity for each mention. The above probabilistic models usually need to predefine many features, and calculating the max-marginal probability becomes difficult as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 apply a Graph Convolutional Network to flexibly encode entity graphs. However, graph-based methods are computationally expensive because there are many candidate entity nodes in the graph.
To reduce the computation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focuses on a fixed number of mentions. Unfortunately, choosing the number of attention mentions is not easy in practice. Two recent studies BIBREF8 , BIBREF36 link all mentions by scanning the pairs of mentions at most once; they assume each mention only needs to be consistent with one other mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they do not make full use of the information of previously referred entities for subsequent entity disambiguation. Nguyen et al. BIBREF2 use a sequence model, but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations. Their model does not consider the long-term impact of current decisions on subsequent choices, nor do they add the selected target entity information to the current state to help disambiguation.
Reinforcement Learning
In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in games, such as Go BIBREF37 and Atari games BIBREF38 . Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 , BIBREF39 , BIBREF5 . Feng et al. BIBREF5 used reinforcement learning for the relation classification task by filtering out the noisy data from the sentence bag, and they achieved large improvements compared with traditional classifiers. Zhang et al. BIBREF40 applied reinforcement learning to sentence representation by automatically discovering task-relevant structures. To automate taxonomy induction from a set of terms, Han et al. BIBREF41 designed an end-to-end reinforcement learning model to determine which term to select and where to place it in the taxonomy, which effectively reduced the error propagation between the two phases. Inspired by the above works, we also add reinforcement learning to our framework.
Conclusions
In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns a policy for selecting target entities in a sequential manner and makes decisions based on the current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. Each selection result in the current state also has a long-term impact on subsequent decisions, which gives the learned policy a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets; the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and to determine which mention should be disambiguated first in the document.
This research is supported by the National Key Research and Development Program of China (No. 2018YFB1004703), the Beijing Municipal Science and Technology Project under grant (No. Z181100002718004), and the National Natural Science Foundation of China grants (No. 61602466). | AIDA-B, ACE2004, MSNBC, AQUAINT, WNED-CWEB, WNED-WIKI |